From: <tho...@us...> - 2011-01-09 20:58:16
Revision: 4069 http://bigdata.svn.sourceforge.net/bigdata/?rev=4069&view=rev
Author: thompsonbry
Date: 2011-01-09 20:58:02 +0000 (Sun, 09 Jan 2011)

Log Message:
-----------
Merge JOURNAL_HA_BRANCH to QUADS_QUERY_BRANCH [2601:4061].

A summary of known test failures follows. I have not yet tested quads query against the bigdata federation (that will require a checkout on a machine with more resources). My focus in creating this summary was to verify the branch-to-branch merge. For the most part, it appears that the test failures in the post-merge version of the branch were pre-existing failures in either the JOURNAL_HA_BRANCH or the QUADS_QUERY_BRANCH. Notes appear beneath the test suite to which they apply. When there are no comments, the test suite ran without any failures or errors.

suite.addTest( com.bigdata.cache.TestAll.suite() );
    TestHardReferenceGlobalLRURecycler#test_concurrentOperations()
    TestHardReferenceGlobalLRURecyclerWithExplicitDeleteRequired#test_concurrentOperations()

suite.addTest( com.bigdata.io.TestAll.suite() );
    TestFileChannel#test_transferAllFrom() - hangs under OSX.

suite.addTest( com.bigdata.net.TestAll.suite() );
suite.addTest( com.bigdata.config.TestAll.suite() );
suite.addTest( com.bigdata.util.TestAll.suite() );
suite.addTest( com.bigdata.util.concurrent.TestAll.suite() );
    1 failure (before the merge, under OSX). Ok under Windows after the merge.

suite.addTest( com.bigdata.striterator.TestAll.suite() );
suite.addTest( com.bigdata.counters.TestAll.suite() );
suite.addTest( com.bigdata.rawstore.TestAll.suite() );
suite.addTest( com.bigdata.btree.TestAll.suite() );
suite.addTest( com.bigdata.concurrent.TestAll.suite() );
suite.addTest( com.bigdata.quorum.TestAll.suite() );
suite.addTest( com.bigdata.ha.TestAll.suite() ); // Note: this has a dependency on the quorum package.
suite.addTest(com.bigdata.io.writecache.TestAll.suite());
suite.addTest( com.bigdata.journal.TestAll.suite() );
    This failure was also present in the JOURNAL_HA_BRANCH:
    java.lang.IllegalArgumentException: The commit counter must be greter than zero if there is a commit record: commitRecordAddr=50331651, but commitCounter=0
        at com.bigdata.journal.RootBlockView.<init>(RootBlockView.java:471)
        at com.bigdata.journal.TestRootBlockView.test_ctor_correctRejection(TestRootBlockView.java:404)

suite.addTest( com.bigdata.journal.ha.TestAll.suite() );
    This suite produces many test errors in the quads branch (post-merge, of course). Those errors are also present in the JOURNAL_HA_BRANCH.

suite.addTest( com.bigdata.resources.TestAll.suite() );
suite.addTest( com.bigdata.relation.TestAll.suite() );
suite.addTest( com.bigdata.bop.TestAll.suite() );
suite.addTest( com.bigdata.relation.rule.eval.TestAll.suite() );
suite.addTest( com.bigdata.mdi.TestAll.suite() );
    Note: this test suite is now empty - the tests are now invoked from the service test suite (for the HA branch as well).

suite.addTest( com.bigdata.service.TestAll.suite() );
    - Pre-merge: takes a long time to run under OSX (2000s with 2 errors, but 280s with 2 failures and 1 error on the next CI run (#4)). When run from Eclipse, only TestMasterTimeoutIdleTask#test_idleTimeout_LT_chunkTimeout() fails; it fails repeatably under OSX (before the merge) with an assertion error at line #473. When run under ant, however, it may be running additional tests which interact with zookeeper or jini (verify this).
    - Post-merge: TestMasterTask#test_writeStartStop2() fails once (it passes on retry, so this is one of the stochastic problems with that test suite). No other errors.

suite.addTest( com.bigdata.bop.fed.TestAll.suite() );
    - The only failures are "test_something()" methods.
suite.addTest( com.bigdata.sparse.TestAll.suite() );
suite.addTest( com.bigdata.search.TestAll.suite() );
suite.addTest( com.bigdata.bfs.TestAll.suite() );
    TestFileMetadataIndex#test_create_update() - fails.
    TestFileMetadataIndex#test_delete01() - fails.
    TestRangeScan#test_rangeScan() - write test.
    TestRangeDelete#test_rangeDelete() - write test.

// suite.addTest( com.bigdata.service.mapReduce.TestAll.suite() );

// Jini integration
suite.addTest(com.bigdata.jini.TestAll.suite());
    - Not tested under Eclipse.

// RDF
suite.addTest(com.bigdata.rdf.TestAll.suite());
    - 8 failures before the merge under the OSX CI build.

suite.addTest(com.bigdata.rdf.sail.TestAll.suite());
    There are some failures here related to the inlining work that Mike is currently performing, and to the magic search integration into the SAIL, which has not yet been ported from the JOURNAL_HA_BRANCH. The failures are summarized below.
    TestNamedGraphs - leaks journal files (maybe testSearchQuery?).
    TestSearchQuery#testWithMetadata() - leaks journal files.
    TestTempTripleStore::
        TestSPOStarJoin#testStarJoin1() fails.
        TestSPOStarJoin#testStarJoin2() fails.
    TestLocalTripleStore::
        TestSPOStarJoin#testStarJoin1() fails.
        TestSPOStarJoin#testStarJoin2() fails.
    TestLocalTripleStoreWithoutStatementIdentifiers::
        TestSPOStarJoin#testStarJoin1() fails.
        TestSPOStarJoin#testStarJoin2() fails.
    TestBigdataSailWithQuads::
        TestNamedGraphs#testSearchQuery() fails.
        TestSearchQuery#testWithMetadata() fails.
        TestBigdataEvaluationStrategyImpl#test_free_text_search() fails.
        BigdataConnectionTest#testPreparedTupleQuery2() fails.
        BigdataSparqlTest#open-cmp-01 fails.
        BigdataSparqlTest#open-cmp-02 fails.
    TestBigdataSailWithoutSids::
        TestSearchQuery#testWithMetadata() fails.
        TestBigdataEvaluationStrategyImpl#test_free_text_search() fails.
    TestBigdataSailWithSids::
        TestSearchQuery#testWithMetadata() fails.
        TestBigdataEvaluationStrategyImpl#test_free_text_search() fails.
    TestBigdataSailWithSidsWithoutInlining::
        TestSearchQuery#testWithMetadata() fails.
        TestBigdataEvaluationStrategyImpl#test_free_text_search() fails.

Next steps are to clean up CI against the post-merge version of the branch and to bring forward some remaining SAIL-related features from the JOURNAL_HA_BRANCH:
- MikeP: hand reconcile BigdataEvaluationStrategy2 => BigdataEvaluationStrategy (magic search feature port).
- MikeP: update IsInline and IsLiteral to the BOp model.

Modified Paths:
--------------
branches/QUADS_QUERY_BRANCH/.project
branches/QUADS_QUERY_BRANCH/bigdata/src/architecture/mergePriority.xls
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BytesUtil.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexMetadata.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentBuilder.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentCheckpoint.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/Node.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/keys/IKeyBuilder.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/keys/KeyBuilder.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractBufferStrategy.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractJournal.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractLocalTransactionManager.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractTask.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/BufferMode.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/CommitRecord.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/CommitRecordIndex.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/ConcurrencyManager.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/DirectBufferStrategy.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/DiskBackedBufferStrategy.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/DiskOnlyStrategy.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/DumpJournal.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/FileMetadata.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/IAtomicStore.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/IBufferStrategy.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/IRootBlockView.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/ITransactionService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/Journal.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/JournalTransactionService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/Options.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/RWStrategy.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/RootBlockView.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/TemporaryRawStore.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/TemporaryStore.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/TransientBufferStrategy.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/WORMStrategy.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/package.html
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rawstore/AbstractRawStore.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rawstore/IAddressManager.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rawstore/IRawStore.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rawstore/WormAddressManager.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/relation/AbstractResource.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/relation/accesspath/BlockingBuffer.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/relation/rule/eval/ProgramTask.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/relation/rule/eval/pipeline/JoinTaskFactoryTask.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/AllocBlock.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/Allocator.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/FixedAllocator.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/FixedOutputStream.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/IStore.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/PSOutputStream.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/RWStore.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/RWWriteCacheService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/StorageTerminalError.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/search/FullTextIndex.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/search/Hit.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/search/Hiterator.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/search/ReadIndexTask.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/search/TermFrequencyData.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/search/TermMetadata.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/search/TokenBuffer.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/util/ChecksumError.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/util/ChecksumUtility.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/TestIndexSegmentAddressManager.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/TestIndexSegmentCheckpoint.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/counters/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/io/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/AbstractBufferStrategyTestCase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/AbstractIndexManagerTestCase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/AbstractInterruptsTestCase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/AbstractJournalTestCase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/AbstractMRMWTestCase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/AbstractMROWTestCase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/AbstractRestartSafeTestCase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/StressTestConcurrentTx.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/StressTestConcurrentUnisolatedIndices.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/TestAbort.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/TestCommitHistory.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/TestConcurrentJournal.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/TestJournalBasics.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/TestJournalShutdown.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/TestRootBlockView.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/TestWORMStrategy.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/rawstore/AbstractRawStoreTestCase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/relation/rule/AbstractRuleTestCase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/relation/rule/TestRule.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/resources/AbstractResourceManagerTestCase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/rwstore/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/search/TestKeyBuilder.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/search/TestPrefixSearch.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/util/TestChecksumUtility.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/BigdataZooDefs.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/ManageLogicalServiceTask.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniCoreServicesConfiguration.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/config/ZookeeperClientConfig.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/jini/start/process/JiniCoreServicesProcessHelper.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/zookeeper/AbstractZNodeConditionWatcher.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/zookeeper/DumpZookeeper.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/zookeeper/HierarchicalZNodeWatcher.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/zookeeper/UnknownChildrenWatcher.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/zookeeper/ZLockImpl.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/zookeeper/ZooKeeperAccessor.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/zookeeper/AbstractZooTestCase.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/zookeeper/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/zookeeper/TestZLockImpl.java
branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/RWStore.properties
branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/build.properties
branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/build.xml
branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/RWStore.properties
branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/build.properties
branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/build.xml
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/ClosureStats.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/Justification.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/TruthMaintenance.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/DefaultExtensionFactory.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/IExtensionFactory.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/LexiconConfiguration.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/XSD.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/CompareBOp.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/InlineEQ.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/MathBOp.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/load/SingleResourceReaderTask.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/magic/IRISUtils.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rio/LoadStats.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/rules/AbstractRuleDistinctTermScan.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/DefaultGraphSolutionExpander.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPORelation.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/AbstractLocalTripleStore.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/AbstractTripleStore.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/BD.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/DataLoader.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/vocab/Vocabulary.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/SampleExtensionFactory.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/TestEncodeDecodeKeys.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/metrics/TaskATest.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/metrics/TestMetrics.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestJustifications.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestRuleFastClosure_3_5_6_7_9.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestSlice.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/spo/TestSPORelation.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/FreeTextSearchExpander.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/NanoSparqlServer.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestNamedGraphs.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestSearchQuery.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/stress/testSimpleLubm.xml
branches/QUADS_QUERY_BRANCH/build.properties
branches/QUADS_QUERY_BRANCH/build.xml

Added Paths:
-----------
branches/QUADS_QUERY_BRANCH/bigdata/src/architecture/RWStore.xls
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/striped/
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/striped/StripedCounters.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/HACommitGlue.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/HAGlue.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/HAGlueBase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/HAPipelineGlue.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/HAReadGlue.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/QuorumCommit.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/QuorumCommitImpl.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/QuorumPipeline.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/QuorumPipelineImpl.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/QuorumRead.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/QuorumReadImpl.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/QuorumService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/QuorumServiceBase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/package.html
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/pipeline/
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/pipeline/HAReceiveService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/pipeline/HASendService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/pipeline/HAWriteMessageBase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/pipeline/package.html
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/io/AllocationData.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/io/ObjectSocketChannelStream.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/io/messages/
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/io/writecache/
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/io/writecache/BufferedWrite.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/io/writecache/IWriteCache.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/io/writecache/package.html
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/DeleteBlockCommitter.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/IHABufferStrategy.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/JournalDelegate.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/RWAddressManager.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/RootBlockCommitter.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/RootBlockUtility.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/ha/
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/ha/HAWriteMessage.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/ha/package.html
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/AbstractQuorumClient.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/AbstractQuorumMember.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/AsynchronousQuorumCloseException.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/Quorum.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumActor.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumClient.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumEvent.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumEventEnum.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumException.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumListener.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumMember.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumStateChangeListener.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumStateChangeListenerBase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumWatcher.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/package.html
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/relation/rule/Binding.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/relation/rule/IBinding.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/DirectFixedAllocator.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/IAllocationContext.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/PhysicalAddressResolutionException.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/StorageStats.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/search/DefaultAnalyzerFactory.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/search/IAnalyzerFactory.java
branches/QUADS_QUERY_BRANCH/bigdata/src/samples/com/bigdata/samples/btree/JournalReadOnlyTxExample.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/counters/striped/
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/counters/striped/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/counters/striped/TestStripedCounters.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/ha/
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/ha/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/ha/pipeline/
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/ha/pipeline/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/ha/pipeline/TestHASendAndReceive.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/ha/pipeline/TestHASendAndReceive3Nodes.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/io/TestCase3.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/io/messages/
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/io/writecache/
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/io/writecache/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/io/writecache/TestRWWriteCacheService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/io/writecache/TestWORMWriteCacheService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/io/writecache/TestWriteCache.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/TestWORMStrategyNoCache.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/TestWORMStrategyOneCacheBuffer.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/ha/
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/ha/AbstractHAJournalTestCase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/ha/HABranch.txt
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/ha/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/ha/TestHAWORMStrategy.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/ha/TestHAWritePipeline.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/ha/TestJournalHA.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/ha/force-vs-sync.txt
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/quorum/
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/quorum/AbstractQuorumTestCase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/quorum/MockQuorumFixture.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/quorum/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/quorum/TestHA3QuorumSemantics.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/quorum/TestMockQuorumFixture.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/quorum/TestSingletonQuorumSemantics.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/quorum/ha/
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/quorum/zk/
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/journal/
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/journal/ha/
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/journal/ha/zk/
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/quorum/
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/quorum/zk/
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/quorum/zk/OrderedSetDifference.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/quorum/zk/QuorumPipelineState.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/quorum/zk/QuorumServiceState.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/quorum/zk/QuorumTokenState.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/quorum/zk/UnorderedSetDifference.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorum.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/journal/
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/journal/ha/
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/journal/ha/zk/
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/journal/ha/zk/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/AbstractZkQuorumTestCase.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/MockQuorumMember.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/MockServiceRegistrar.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/TestSetDifference.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/TestZkHA3QuorumSemantics.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/TestZkQuorum.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/TestZkSingletonQuorumSemantics.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/zookeeper/TestEphemeralSemantics.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/DateTimeExtension.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/IsInline.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/IsLiteral.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/DumpStore.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/Q14Test.java

Removed Paths:
-------------
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/striped/StripedCounters.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/HACommitGlue.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/HAGlue.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/HAGlueBase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/HAPipelineGlue.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/HAReadGlue.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/QuorumCommit.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/QuorumCommitImpl.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/QuorumPipeline.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/QuorumPipelineImpl.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/QuorumRead.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/QuorumReadImpl.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/QuorumService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/QuorumServiceBase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/package.html
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/pipeline/
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/pipeline/HAReceiveService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/pipeline/HASendService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/pipeline/HAWriteMessageBase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/ha/pipeline/package.html
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/io/IWriteCache.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/io/WriteCache.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/io/WriteCacheService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/io/writecache/BufferedWrite.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/io/writecache/IWriteCache.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/io/writecache/package.html
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/ha/HAWriteMessage.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/ha/package.html
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/AbstractQuorum.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/AbstractQuorumClient.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/AbstractQuorumMember.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/AsynchronousQuorumCloseException.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/Quorum.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumActor.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumClient.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumEvent.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumEventEnum.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumException.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumListener.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumMember.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumStateChangeListener.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumStateChangeListenerBase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/QuorumWatcher.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/quorum/package.html
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/BlobAllocator.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/Config.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/DirectOutputStream.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/ICommitCallback.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/LockFile.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/PSInputStream.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/WriteBlock.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/service/IWritePipeline.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/counters/striped/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/counters/striped/TestStripedCounters.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/ha/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/ha/pipeline/
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/ha/pipeline/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/ha/pipeline/TestHASendAndReceive.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/ha/pipeline/TestHASendAndReceive3Nodes.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/io/TestWriteCache.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/io/TestWriteCacheService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/io/writecache/TestAll.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/io/writecache/TestRWWriteCacheService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/io/writecache/TestWORMWriteCacheService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/io/writecache/TestWriteCache.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/ReplicatedStore.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/ReplicatedStoreService.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/ha/AbstractHAJournalTestCase.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/ha/HABranch.txt branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/ha/TestAll.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/ha/TestHAWORMStrategy.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/ha/TestHAWritePipeline.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/ha/TestJournalHA.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/ha/force-vs-sync.txt branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/quorum/AbstractQuorumTestCase.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/quorum/MockQuorumFixture.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/quorum/TestAll.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/quorum/TestHA3QuorumSemantics.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/quorum/TestMockQuorumFixture.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/quorum/TestSingletonQuorumSemantics.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/quorum/ha/ branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/quorum/zk/ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/journal/ha/ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/journal/ha/zk/ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/quorum/zk/ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/quorum/zk/OrderedSetDifference.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/quorum/zk/QuorumPipelineState.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/quorum/zk/QuorumServiceState.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/quorum/zk/QuorumTokenState.java 
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/quorum/zk/UnorderedSetDifference.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorum.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/quorum/zk/ZKQuorumImpl.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/zookeeper/ZNodeLockWatcher.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/journal/ha/ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/journal/ha/zk/ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/journal/ha/zk/TestAll.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/AbstractZkQuorumTestCase.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/MockQuorumMember.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/MockServiceRegistrar.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/TestAll.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/TestSetDifference.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/TestZkHA3QuorumSemantics.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/TestZkQuorum.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/quorum/zk/TestZkSingletonQuorumSemantics.java Property Changed: ---------------- branches/QUADS_QUERY_BRANCH/ branches/QUADS_QUERY_BRANCH/bigdata-compatibility/ branches/QUADS_QUERY_BRANCH/bigdata-compatibility/src/ branches/QUADS_QUERY_BRANCH/bigdata-compatibility/src/test/ branches/QUADS_QUERY_BRANCH/bigdata-compatibility/src/test/com/ branches/QUADS_QUERY_BRANCH/bigdata-compatibility/src/test/com/bigdata/ branches/QUADS_QUERY_BRANCH/bigdata-compatibility/src/test/com/bigdata/journal/ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/attr/ 
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/disco/ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/util/config/ branches/QUADS_QUERY_BRANCH/bigdata-perf/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/lib/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/benchmark/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/benchmark/generator/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/benchmark/model/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/benchmark/qualification/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/benchmark/serializer/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/benchmark/tools/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/benchmark/vocabulary/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/resources/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/resources/logging/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/test/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/test/benchmark/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/test/benchmark/bigdata/ branches/QUADS_QUERY_BRANCH/bigdata-perf/btc/ branches/QUADS_QUERY_BRANCH/bigdata-perf/btc/src/resources/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/LEGAL/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/lib/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/edu/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/ 
branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/uba/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/api/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/bigdata/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/answers (U1)/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/config/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/logging/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/scripts/ branches/QUADS_QUERY_BRANCH/bigdata-perf/uniprot/ branches/QUADS_QUERY_BRANCH/bigdata-perf/uniprot/src/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/relation/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/relation/rule/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/util/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/samples/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/samples/com/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/samples/com/bigdata/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/samples/com/bigdata/rdf/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/samples/com/bigdata/rdf/internal/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/constraints/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/relation/ 
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/relation/rule/ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/changesets/ branches/QUADS_QUERY_BRANCH/dsi-utils/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/java/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/java/it/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/java/it/unimi/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/java/it/unimi/dsi/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/java/it/unimi/dsi/compression/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/java/it/unimi/dsi/io/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/java/it/unimi/dsi/util/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/test/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/test/it/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/test/it/unimi/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/test/it/unimi/dsi/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/test/it/unimi/dsi/io/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/test/it/unimi/dsi/util/ branches/QUADS_QUERY_BRANCH/osgi/ branches/QUADS_QUERY_BRANCH/src/resources/bin/config/ branches/QUADS_QUERY_BRANCH/src/resources/config/ Property changes on: branches/QUADS_QUERY_BRANCH ___________________________________________________________________ Modified: svn:mergeinfo - /branches/BTREE_BUFFER_BRANCH:2004-2045 /branches/DEV_BRANCH_27_OCT_2009:2270-2546,2548-2782 /branches/LEXICON_REFACTOR_BRANCH:2633-3304 /branches/bugfix-btm:2594-3237 /branches/dev-btm:2574-2730 /branches/fko:3150-3194 /trunk:3659-4061 + /branches/BTREE_BUFFER_BRANCH:2004-2045 /branches/DEV_BRANCH_27_OCT_2009:2270-2546,2548-2782 /branches/JOURNAL_HA_BRANCH:2596-4066 /branches/LEXICON_REFACTOR_BRANCH:2633-3304 /branches/bugfix-btm:2594-3237 /branches/dev-btm:2574-2730 /branches/fko:3150-3194 /trunk:3392-3437,3656-4061 Modified: branches/QUADS_QUERY_BRANCH/.project =================================================================== --- 
branches/QUADS_QUERY_BRANCH/.project 2011-01-09 15:38:34 UTC (rev 4068) +++ branches/QUADS_QUERY_BRANCH/.project 2011-01-09 20:58:02 UTC (rev 4069) @@ -1,6 +1,6 @@ <?xml version="1.0" encoding="UTF-8"?> <projectDescription> - <name>bigdata</name> + <name>bigdata-quads-clean-for-merge</name> <comment></comment> <projects> </projects> Copied: branches/QUADS_QUERY_BRANCH/bigdata/src/architecture/RWStore.xls (from rev 4066, branches/JOURNAL_HA_BRANCH/bigdata/src/architecture/RWStore.xls) =================================================================== (Binary files differ) Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/architecture/mergePriority.xls =================================================================== (Binary files differ) Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2011-01-09 15:38:34 UTC (rev 4068) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2011-01-09 20:58:02 UTC (rev 4069) @@ -3350,6 +3350,8 @@ * @todo Actually, I think that this is just a fence post in ringbuffer * beforeOffer() method and the code might work without the synchronized * block if the fence post was fixed. + * + * @see https://sourceforge.net/apps/trac/bigdata/ticket/201 */ synchronized (this) { @@ -3682,6 +3684,7 @@ // write the serialized node or leaf onto the store. final long addr; + final long oldAddr; { final long begin = System.nanoTime(); @@ -3691,7 +3694,9 @@ // now we have a new address, delete previous identity if any if (node.isPersistent()) { - store.delete(node.getIdentity()); + oldAddr = node.getIdentity(); + } else { + oldAddr = 0; } btreeCounters.writeNanos += System.nanoTime() - begin; @@ -3708,6 +3713,13 @@ */ node.setIdentity(addr); + if (oldAddr != 0L) { + if (storeCache!=null) { + // remove from cache. 
+ storeCache.remove(oldAddr); + } + store.delete(oldAddr); + } node.setDirty(false); @@ -3821,9 +3833,10 @@ assert tmp.position() == 0; - assert tmp.limit() == store.getByteCount(addr) : "limit=" - + tmp.limit() + ", byteCount(addr)=" - + store.getByteCount(addr)+", addr="+store.toString(addr); + // Note: This assertion is invalidated when checksums are inlined in the store records. +// assert tmp.limit() == store.getByteCount(addr) : "limit=" +// + tmp.limit() + ", byteCount(addr)=" +// + store.getByteCount(addr)+", addr="+store.toString(addr); btreeCounters.readNanos.addAndGet( System.nanoTime() - begin ); Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java 2011-01-09 15:38:34 UTC (rev 4068) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java 2011-01-09 20:58:02 UTC (rev 4069) @@ -39,6 +39,7 @@ import com.bigdata.journal.ICommitter; import com.bigdata.journal.IIndexManager; import com.bigdata.journal.Name2Addr; +import com.bigdata.journal.RWStrategy; import com.bigdata.journal.Name2Addr.Entry; import com.bigdata.mdi.IResourceMetadata; import com.bigdata.mdi.JournalMetadata; @@ -1173,8 +1174,21 @@ assertNotReadOnly(); - if (getIndexMetadata().getDeleteMarkers()) { - + /* + * FIXME Per https://sourceforge.net/apps/trac/bigdata/ticket/221, we + * should special case this for the RWStore when delete markers are not + * enabled and just issue deletes against each node and leave in the + * BTree. This could be done using a post-order traversal of the nodes + * and leaves such that the parent is not removed from the store until + * its children have been removed. The deletes should be low-level + * IRawStore#delete(addr) invocations without maintenance to the B+Tree + * data structures. 
Afterwards replaceRootWithEmptyLeaf() should be + * invoked to discard the hard reference ring buffer and associate a new + * root leaf with the B+Tree. + */ + if (getIndexMetadata().getDeleteMarkers() + || getStore() instanceof RWStrategy) { + /* * Write deletion markers for each non-deleted entry. When the * transaction commits, those delete markers will have to validate Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BytesUtil.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BytesUtil.java 2011-01-09 15:38:34 UTC (rev 4068) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BytesUtil.java 2011-01-09 20:58:02 UTC (rev 4069) @@ -27,6 +27,7 @@ import it.unimi.dsi.io.InputBitStream; import it.unimi.dsi.io.OutputBitStream; +import java.nio.ByteBuffer; import java.util.Comparator; import java.util.regex.Matcher; import java.util.regex.Pattern; @@ -1019,4 +1020,81 @@ static final private Pattern PATTERN_BYTE_COUNT = Pattern.compile( "([0-9]+)(k|kb|m|mb|g|gb)?", Pattern.CASE_INSENSITIVE); + /** + * Return a byte[] having the data in the {@link ByteBuffer} from the + * {@link ByteBuffer#position()} to the {@link ByteBuffer#limit()}. The + * position, limit, and mark are not affected by this operation. When the + * {@link ByteBuffer} has a backing array, the array offset is ZERO (0), and + * the {@link ByteBuffer#limit()} is equal to the + * {@link ByteBuffer#capacity()} then the backing array is returned. + * Otherwise, a new byte[] is allocated and the data are copied into that + * byte[], which is then returned. + * + * @param b + * The {@link ByteBuffer}. + * + * @return The byte[]. + */ + static public byte[] toArray(final ByteBuffer b) { + + return toArray(b, false/* forceCopy */); + + } + + /** + * Return a byte[] having the data in the {@link ByteBuffer} from the + * {@link ByteBuffer#position()} to the {@link ByteBuffer#limit()}. 
The + * position, limit, and mark are not affected by this operation. + * <p> + * Under certain circumstances it is possible and may be desirable to return + * the backing {@link ByteBuffer#array}. This behavior is enabled by + * <code>forceCopy := false</code>. + * <p> + * It is possible to return the backing byte[] when the {@link ByteBuffer} + * has a backing array, the array offset is ZERO (0), and the + * {@link ByteBuffer#limit()} is equal to the {@link ByteBuffer#capacity()} + * then the backing array is returned. Otherwise, a new byte[] must be + * allocated, and the data are copied into that byte[], which may then be + * returned. + * + * @param b + * The {@link ByteBuffer}. + * @param forceCopy + * When <code>false</code>, the backing array will be returned if + * possible. + * + * @return The byte[]. + */ + static public byte[] toArray(final ByteBuffer b, final boolean forceCopy) { + + if (b.hasArray() && b.arrayOffset() == 0 && b.position() == 0) { + +// && b.limit() == b.capacity() + + final byte[] a = b.array(); + + if (a.length == b.limit()) { + + return a; + + } + + } + + /* + * Copy the data into a byte[] using a read-only view on the buffer so + * that we do not mess with its position, mark, or limit. + */ + final ByteBuffer tmp = b.asReadOnlyBuffer(); + + final int len = tmp.remaining(); + + final byte[] a = new byte[len]; + + tmp.get(a); + + return a; + + } + } Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexMetadata.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexMetadata.java 2011-01-09 15:38:34 UTC (rev 4068) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexMetadata.java 2011-01-09 20:58:02 UTC (rev 4069) @@ -271,7 +271,7 @@ /** * A reasonable maximum branching factor for a {@link BTree}. 
*/ - int MAX_BTREE_BRANCHING_FACTOR = 1024; + int MAX_BTREE_BRANCHING_FACTOR = 4196; /** * A reasonable maximum branching factor for an {@link IndexSegment}. Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentBuilder.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentBuilder.java 2011-01-09 15:38:34 UTC (rev 4068) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentBuilder.java 2011-01-09 20:58:02 UTC (rev 4069) @@ -59,7 +59,7 @@ import com.bigdata.io.FileChannelUtility; import com.bigdata.io.NOPReopener; import com.bigdata.io.SerializerUtil; -import com.bigdata.io.WriteCache; +import com.bigdata.io.writecache.WriteCache; import com.bigdata.journal.Journal; import com.bigdata.journal.Name2Addr; import com.bigdata.journal.TemporaryRawStore; @@ -71,6 +71,7 @@ import com.bigdata.rawstore.IBlock; import com.bigdata.rawstore.IRawStore; import com.bigdata.rawstore.WormAddressManager; +import com.bigdata.util.ChecksumUtility; /** * Builds an {@link IndexSegment} given a source btree and a target branching @@ -374,6 +375,32 @@ * The bloom filter iff we build one (errorRate != 0.0). */ final IBloomFilter bloomFilter; + + /** + * When <code>true</code> record level checksums will be used in the + * generated file. + * + * FIXME This can not be enabled until we factor out the direct use of the + * {@link WriteCache} since special handling is otherwise required to ensure + * that the checksum makes it into the output record when we write directly + * on the disk. + * + * FIXME When enabling this, make sure that the bloom filter, + * {@link IndexMetadata}, and the blobs are all checksummed and make sure + * that the {@link IndexSegmentStore} verifies the checksums when it reads + * through to the disk and only returns the raw record w/o the trailing + * checksum. 
+ * + * FIXME The right time to reconcile these things may be when this branch + * (HAJournal) is merged with the dynamic shard refactor branch. + */ + final private boolean useChecksums = false; + + /** + * Used to compute record level checksums when {@link #useChecksums} is + * <code>true</code>. + */ + final private ChecksumUtility checker = new ChecksumUtility(); /** * The file on which the {@link IndexSegment} is written. The file is closed @@ -1183,7 +1210,10 @@ throw new IllegalArgumentException(); final long begin_setup = System.... [truncated message content] |
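The `BytesUtil.toArray(ByteBuffer)` helper added in the commit above makes a zero-copy vs. copy decision: it hands back the backing array only when that array exactly spans the buffer's visible region, and otherwise copies through a read-only duplicate so the source buffer's position, limit, and mark are untouched. A minimal standalone sketch of that decision logic follows; the class name `ByteBufferArrays` is hypothetical and not part of the bigdata API.

```java
import java.nio.ByteBuffer;

// Hypothetical standalone sketch of the zero-copy vs. copy decision made by
// the BytesUtil.toArray(ByteBuffer) helper in the diff above.
public class ByteBufferArrays {

    /**
     * Return the data between position() and limit() as a byte[]. The backing
     * array is returned directly (zero-copy) only when it exactly covers the
     * visible region; otherwise the data are copied via a read-only duplicate
     * so the source buffer's position, limit, and mark are not disturbed.
     */
    public static byte[] toArray(final ByteBuffer b) {
        if (b.hasArray() && b.arrayOffset() == 0 && b.position() == 0) {
            final byte[] a = b.array();
            if (a.length == b.limit()) {
                return a; // zero-copy: array exactly spans [position, limit)
            }
        }
        // Copy through a read-only view; get() advances only the view.
        final ByteBuffer tmp = b.asReadOnlyBuffer();
        final byte[] a = new byte[tmp.remaining()];
        tmp.get(a);
        return a;
    }

    public static void main(String[] args) {
        final byte[] backing = {1, 2, 3, 4};
        // Full wrap: the backing array itself is returned (zero-copy).
        System.out.println(toArray(ByteBuffer.wrap(backing)) == backing);
        // Sliced view (position 1, limit 3): a copy is made and the
        // source buffer's position is left unchanged.
        final ByteBuffer sliced = ByteBuffer.wrap(backing, 1, 2);
        final byte[] copy = toArray(sliced);
        System.out.println(copy.length == 2 && copy[0] == 2 && sliced.position() == 1);
    }
}
```

The zero-copy branch is why the commented-out `b.limit() == b.capacity()` check in the real diff matters: sharing the backing array is only safe when the caller's visible region and the array coincide.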
From: <mrp...@us...> - 2011-01-13 17:56:49

Revision: 4086
          http://bigdata.svn.sourceforge.net/bigdata/?rev=4086&view=rev
Author:   mrpersonick
Date:     2011-01-13 17:56:43 +0000 (Thu, 13 Jan 2011)

Log Message:
-----------
trying to get bsbm to qualify

Modified Paths:
--------------
branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/test/benchmark/bigdata/TestBSBM.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailRepository.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/sop/SOp.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/sop/SOp2BOpUtility.java

Modified: branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/test/benchmark/bigdata/TestBSBM.java
===================================================================
--- branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/test/benchmark/bigdata/TestBSBM.java	2011-01-13 17:55:15 UTC (rev 4085)
+++ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/test/benchmark/bigdata/TestBSBM.java	2011-01-13 17:56:43 UTC (rev 4086)
@@ -1,63 +1,143 @@
 package benchmark.bigdata;
 
-import com.bigdata.rdf.sail.bench.NanoSparqlClient;
+import java.io.FileInputStream;
+import java.util.Properties;
 
+import org.apache.log4j.Logger;
+import org.openrdf.model.Statement;
+import org.openrdf.model.impl.URIImpl;
+import org.openrdf.model.vocabulary.RDF;
+import org.openrdf.query.QueryLanguage;
+import org.openrdf.query.TupleQueryResult;
+import org.openrdf.query.algebra.QueryRoot;
+import org.openrdf.query.algebra.TupleExpr;
+import org.openrdf.query.algebra.UnaryTupleOperator;
+import org.openrdf.repository.RepositoryResult;
+import org.openrdf.repository.sail.SailTupleQuery;
+
+import com.bigdata.rdf.sail.BigdataSail;
+import com.bigdata.rdf.sail.BigdataSailRepository;
+import com.bigdata.rdf.sail.BigdataSailRepositoryConnection;
+import com.bigdata.rdf.sail.BigdataSailTupleQuery;
+import com.bigdata.rdf.sail.sop.SOpTree;
+import com.bigdata.rdf.sail.sop.SOpTreeBuilder;
+import com.bigdata.rdf.store.AbstractTripleStore;
+
 public class TestBSBM {
 
-    private static final String serviceURL = "http://localhost:8080";
+    private static final Logger log = Logger.getLogger(TestBSBM.class);
+
+    private static final String PROPS =
+        "/Users/mikepersonick/Documents/workspace/bigdata-query-branch/bigdata-perf/bsbm/RWStore.properties";
+
+    private static final String JNL =
+        "/Users/mikepersonick/Documents/systap/bsbm/bsbm-qual/bigdata-bsbm.RW.jnl";
 
-    private static final String queryStr =
-/*
-        "PREFIX bsbm-inst: <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/instances/> " +
-        "PREFIX bsbm: <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/vocabulary/> " +
-        "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> " +
-        "PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> " +
-        "SELECT DISTINCT ?product " +
-        "WHERE { " +
-        "  { " +
-        "    ?product rdfs:label ?label . " +
-        "    ?product rdf:type <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/instances/ProductType75> . " +
-        "    ?product bsbm:productFeature <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/instances/ProductFeature379> . " +
-        "    ?product bsbm:productFeature <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/instances/ProductFeature2248> . " +
-        "    ?product bsbm:productPropertyTextual1 ?propertyTextual . " +
-        "    ?product bsbm:productPropertyNumeric1 ?p1 . " +
-        "    FILTER ( ?p1 > 160 ) " +
-        "  } UNION { " +
-        "    ?product rdfs:label ?label . " +
-        "    ?product rdf:type <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/instances/ProductType75> . " +
-        "    ?product bsbm:productFeature <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/instances/ProductFeature379> . " +
-        "    ?product bsbm:productFeature <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/instances/ProductFeature2251> . " +
-        "    ?product bsbm:productPropertyTextual1 ?propertyTextual . " +
-        "    ?product bsbm:productPropertyNumeric2 ?p2 . " +
-        "    FILTER ( ?p2 > 461 ) " +
-        "  } " +
-//        "  FILTER ( ?p1 > 160 || ?p2> 461 ) " +
-        "} "
-//        "ORDER BY ?label " +
-//        "OFFSET 5 " +
-//        "LIMIT 10"
-        ;
-*/
-        "construct { <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/instances/dataFromProducer24/Product1131> ?p ?o . } " +
-        "where { " +
-        "  <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/instances/dataFromProducer24/Product1131> ?p ?o . " +
-        "}";
+    private static final String NS = "qual";
 
+    private static final String QUERY =
+        "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> " +
+        " PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> " +
+        " PREFIX bsbm: <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/vocabulary/> " +
+        " SELECT DISTINCT ?product ?productLabel " +
+        " WHERE { " +
+        "   ?product rdfs:label ?productLabel . " +
+        "   FILTER (<http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/instances/dataFromProducer36/Product1625> != ?product) " +
+        "   <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/instances/dataFromProducer36/Product1625> bsbm:productFeature ?prodFeature . " +
+        "   ?product bsbm:productFeature ?prodFeature . " +
+        "   <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/instances/dataFromProducer36/Product1625> bsbm:productPropertyNumeric1 ?origProperty1 . " +
+        "   ?product bsbm:productPropertyNumeric1 ?simProperty1 . " +
+        "   FILTER (?simProperty1 < (?origProperty1 + 120) && ?simProperty1 > (?origProperty1 - 120)) " +
+        "   <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/instances/dataFromProducer36/Product1625> bsbm:productPropertyNumeric2 ?origProperty2 . " +
+        "   ?product bsbm:productPropertyNumeric2 ?simProperty2 . " +
+        "   FILTER (?simProperty2 < (?origProperty2 + 170) && ?simProperty2 > (?origProperty2 - 170)) " +
+        " } " +
+        " ORDER BY ?productLabel " +
+        " LIMIT 5";
 
     /**
      * @param args
      */
-    public static void main(String[] args) {
+    public static void main(String[] args) throws Exception {
 
         try {
 
-            args = new String[] {
-                "-query",
-                queryStr,
-                serviceURL
-            };
+//            args = new String[] {
+//                "-query",
+//                queryStr,
+//                serviceURL
+//            };
+//
+//            NanoSparqlClient.main(args);
 
-            NanoSparqlClient.main(args);
-
+            final Properties props = new Properties();
+            final FileInputStream fis = new FileInputStream(PROPS);
+            props.load(fis);
+            fis.close();
+            props.setProperty(AbstractTripleStore.Options.FILE, JNL);
+            props.setProperty(BigdataSail.Options.NAMESPACE, NS);
+
+            System.err.println(props);
+
+            final BigdataSail sail = new BigdataSail(props);
+            sail.initialize();
+            final BigdataSailRepository repo = new BigdataSailRepository(sail);
+            final BigdataSailRepositoryConnection cxn =
+                repo.getReadOnlyConnection();
+
+            try {
+
+//                final RepositoryResult<Statement> stmts = cxn.getStatements(
+//                        null, RDF.TYPE,
+//                        new URIImpl("http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/instances/ProductType138"), false);
+//                if (stmts.hasNext()) {
+//                    while (stmts.hasNext()) {
+//                        System.err.println(stmts.next());
+//                    }
+//                } else {
+//                    System.err.println("no stmts");
+//                }
+
+                final SailTupleQuery tupleQuery = (SailTupleQuery)
+                    cxn.prepareTupleQuery(QueryLanguage.SPARQL, QUERY);
+                tupleQuery.setIncludeInferred(false /* includeInferred */);
+
+                if (log.isInfoEnabled()) {
+
+                    final BigdataSailTupleQuery bdTupleQuery =
+                        (BigdataSailTupleQuery) tupleQuery;
+                    final QueryRoot root = (QueryRoot) bdTupleQuery.getTupleExpr();
+                    log.info(root);
+
+                    TupleExpr te = root;
+                    while (te instanceof UnaryTupleOperator) {
+                        te = ((UnaryTupleOperator) te).getArg();
+                    }
+
+                    final SOpTreeBuilder stb = new SOpTreeBuilder();
+                    final SOpTree tree = stb.collectSOps(te);
+
+                    log.info(tree);
+                    log.info(QUERY);
+
+                    final TupleQueryResult result = tupleQuery.evaluate();
+                    if (result.hasNext()) {
+                        while (result.hasNext()) {
+                            log.info(result.next());
+                        }
+                    } else {
+                        log.info("no results for query");
+                    }
+
+                }
+
+            } finally {
+                cxn.close();
+                repo.shutDown();
+            }
+
         } catch (Exception ex) {
             ex.printStackTrace();

Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java
===================================================================
--- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java	2011-01-13 17:55:15 UTC (rev 4085)
+++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java	2011-01-13 17:56:43 UTC (rev 4086)
@@ -272,7 +272,8 @@
      */
     public BigdataEvaluationStrategyImpl3(
             final BigdataTripleSource tripleSource, final Dataset dataset,
-            final boolean nativeJoins) {
+            final boolean nativeJoins,
+            final boolean allowSesameQueryEvaluation) {
 
         super(tripleSource, dataset);
 
@@ -280,6 +281,7 @@
         this.dataset = dataset;
         this.database = tripleSource.getDatabase();
         this.nativeJoins = nativeJoins;
+        this.allowSesameQueryEvaluation = allowSesameQueryEvaluation;
 
     }
 
@@ -289,6 +291,12 @@
     private boolean nativeJoins;
 
     /**
+     * If true, allow queries that cannot be executed natively to be executed
+     * by Sesame.
+     */
+    private final boolean allowSesameQueryEvaluation;
+
+    /**
      * A set of properties that act as query hints during evaluation.
      */
     private Properties queryHints;
 
@@ -364,18 +372,30 @@
 
         } catch (UnsupportedOperatorException ex) {
 
-            // Use Sesame 2 evaluation
-
-            log.warn("could not evaluate natively, using Sesame evaluation");
-
-            if (log.isInfoEnabled()) {
-                log.info(ex.getOperator());
-            }
-
-            nativeJoins = false;
-
-            return super.evaluate(union, bs);
-
+            if (allowSesameQueryEvaluation) {
+
+                // Use Sesame 2 evaluation
+
+                log.warn("could not evaluate natively, using Sesame evaluation");
+
+                if (log.isInfoEnabled()) {
+                    log.info(ex.getOperator());
+                }
+
+                // turn off native joins for the remainder, we can't do
+                // partial execution
+                nativeJoins = false;
+
+                // defer to Sesame
+                return super.evaluate(union, bs);
+
+            } else {
+
+                // allow the query to fail
+                throw ex;
+
+            }
+
         }
 
     }
 
@@ -409,18 +429,30 @@
 
         } catch (UnsupportedOperatorException ex) {
 
-            // Use Sesame 2 evaluation
+            if (allowSesameQueryEvaluation) {
+
+                // Use Sesame 2 evaluation
+
+                log.warn("could not evaluate natively, using Sesame evaluation");
+
+                if (log.isInfoEnabled()) {
+                    log.info(ex.getOperator());
+                }
+
+                // turn off native joins for the remainder, we can't do
+                // partial execution
+                nativeJoins = false;
+
+                // defer to Sesame
+                return super.evaluate(join, bs);
+
+            } else {
+
+                // allow the query to fail
+                throw ex;
+
+            }
 
-            log.warn("could not evaluate natively, using Sesame evaluation");
-
-            if (log.isInfoEnabled()) {
-                log.info(ex.getOperator());
-            }
-
-            nativeJoins = false;
-
-            return super.evaluate(join, bs);
-
         }
 
     }
 
@@ -454,18 +486,30 @@
 
         } catch (UnsupportedOperatorException ex) {
 
-            // Use Sesame 2 evaluation
+            if (allowSesameQueryEvaluation) {
+
+                // Use Sesame 2 evaluation
+
+                log.warn("could not evaluate natively, using Sesame evaluation");
+
+                if (log.isInfoEnabled()) {
+                    log.info(ex.getOperator());
+                }
+
+                // turn off native joins for the remainder, we can't do
+                // partial execution
+                nativeJoins = false;
+
+                // defer to Sesame
+                return super.evaluate(leftJoin, bs);
+
+            } else {
+
+                // allow the query to fail
+
throw ex; + + } - log.warn("could not evaluate natively, using Sesame evaluation"); - - if (log.isInfoEnabled()) { - log.info(ex.getOperator()); - } - - nativeJoins = false; - - return super.evaluate(leftJoin, bs); - } } @@ -476,6 +520,9 @@ try { return _evaluateNatively(tupleExpr, bs); } catch (UnrecognizedValueException ex) { + if (log.isInfoEnabled()) { + log.info("unrecognized value in query: " + ex.getValue()); + } return new EmptyIteration<BindingSet, QueryEvaluationException>(); } catch (QueryEvaluationException ex) { throw ex; @@ -652,6 +699,10 @@ try { + if (log.isDebugEnabled()) { + log.debug("running native query: " + BOpUtility.toString(query)); + } + final IRunningQuery runningQuery = queryEngine.eval(query); final IAsynchronousIterator<IBindingSet[]> it1 = Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-01-13 17:55:15 UTC (rev 4085) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-01-13 17:56:43 UTC (rev 4086) @@ -384,6 +384,20 @@ public static final String DEFAULT_NEW_EVAL_STRATEGY = "false"; + /** + * Option as to whether or not to allow Sesame evaluation of queries + * that cannot be run natively (if false, these queries will throw + * exceptions. If true, these queries will run heinously slow but they + * will run. (default + * {@value #DEFAULT_NEW_EVAL_STRATEGY}). + */ + public static final String ALLOW_SESAME_QUERY_EVALUATION = BigdataSail.class.getPackage() + .getName()+ ".allowSesameQueryEvaluation"; + + public static final String DEFAULT_ALLOW_SESAME_QUERY_EVALUATION = "false"; + + + } /** @@ -531,6 +545,14 @@ final private boolean starJoins; /** + * When true, allow queries that cannot be executed natively to be + * executed by Sesame. 
+ * + * @See {@link Options#ALLOW_SESAME_QUERY_EVALUATION} + */ + final private boolean allowSesameQueryEvaluation; + + /** * <code>true</code> iff the {@link BigdataSail} has been * {@link #initialize()}d and not {@link #shutDown()}. */ @@ -955,6 +977,19 @@ } + // Sesame query evaluation + { + + allowSesameQueryEvaluation = Boolean.parseBoolean(properties.getProperty( + BigdataSail.Options.ALLOW_SESAME_QUERY_EVALUATION, + BigdataSail.Options.DEFAULT_ALLOW_SESAME_QUERY_EVALUATION)); + + if (log.isInfoEnabled()) + log.info(BigdataSail.Options.ALLOW_SESAME_QUERY_EVALUATION + "=" + + allowSesameQueryEvaluation); + + } + namespaces = Collections.synchronizedMap(new LinkedHashMap<String, String>()); @@ -3265,7 +3300,8 @@ if (newEvalStrategy) { strategy = new BigdataEvaluationStrategyImpl3( - tripleSource, dataset, nativeJoins + tripleSource, dataset, nativeJoins, + allowSesameQueryEvaluation ); } else { strategy = new BigdataEvaluationStrategyImpl( @@ -3360,7 +3396,8 @@ if (newEvalStrategy) { strategy = new BigdataEvaluationStrategyImpl3( - tripleSource, dataset, nativeJoins + tripleSource, dataset, nativeJoins, + allowSesameQueryEvaluation ); } else { strategy = new BigdataEvaluationStrategyImpl( Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailRepository.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailRepository.java 2011-01-13 17:55:15 UTC (rev 4085) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailRepository.java 2011-01-13 17:56:43 UTC (rev 4086) @@ -57,7 +57,7 @@ * * @return a read-only connection to the database */ - public SailRepositoryConnection getReadOnlyConnection() + public BigdataSailRepositoryConnection getReadOnlyConnection() throws RepositoryException { return new BigdataSailRepositoryConnection(this, @@ -71,7 +71,7 @@ * * @return a read-only connection to the 
database */ - public SailRepositoryConnection getReadOnlyConnection(long timestamp) + public BigdataSailRepositoryConnection getReadOnlyConnection(long timestamp) throws RepositoryException { return new BigdataSailRepositoryConnection(this, @@ -79,7 +79,7 @@ } - public SailRepositoryConnection getReadWriteConnection() + public BigdataSailRepositoryConnection getReadWriteConnection() throws RepositoryException { try { @@ -95,7 +95,7 @@ } - public SailRepositoryConnection getUnisolatedConnection() + public BigdataSailRepositoryConnection getUnisolatedConnection() throws RepositoryException { try { Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/sop/SOp.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/sop/SOp.java 2011-01-13 17:55:15 UTC (rev 4085) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/sop/SOp.java 2011-01-13 17:56:43 UTC (rev 4086) @@ -86,7 +86,8 @@ private String toString(final Var v) { if (v.hasValue()) { final String s = v.getValue().stringValue(); - return s.substring(s.indexOf('#')); + final int i = s.indexOf('#'); + return i >= 0 ? 
s.substring(s.indexOf('#')) : s; } else { return "?"+v.getName(); } Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/sop/SOp2BOpUtility.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/sop/SOp2BOpUtility.java 2011-01-13 17:55:15 UTC (rev 4085) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/sop/SOp2BOpUtility.java 2011-01-13 17:56:43 UTC (rev 4086) @@ -214,7 +214,7 @@ int i = 0; for (SOpGroup child : children) { // convert the child IStep - args[i] = convert(child, idFactory, db, queryEngine, queryHints); + args[i++] = convert(child, idFactory, db, queryEngine, queryHints); } final LinkedList<NV> anns = new LinkedList<NV>(); @@ -225,6 +225,7 @@ final Union thisOp = new Union(args, NV .asMap(anns.toArray(new NV[anns.size()]))); + return thisOp; } @@ -283,7 +284,7 @@ final SOp sop = child.getSingletonSOp(); final BOp bop = sop.getBOp(); final IPredicate pred = (IPredicate) bop.setProperty( - IPredicate.Annotations.OPTIONAL, "true"); + IPredicate.Annotations.OPTIONAL, Boolean.TRUE); preds.add(pred); } } This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
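The r4086 diff above threads a new `allowSesameQueryEvaluation` flag from `BigdataSail.Options` into `BigdataEvaluationStrategyImpl3`: when native evaluation throws `UnsupportedOperatorException`, the query either falls back to (much slower) Sesame 2 evaluation or fails fast. A minimal, self-contained sketch of setting and parsing the option follows; the property name and default are taken from the diff, but `SesameFallbackConfig` itself is a hypothetical illustration, not a bigdata class:

```java
import java.util.Properties;

public class SesameFallbackConfig {

    // Property name and default string as introduced in r4086
    // (BigdataSail lives in the com.bigdata.rdf.sail package).
    static final String ALLOW_SESAME_QUERY_EVALUATION =
            "com.bigdata.rdf.sail.allowSesameQueryEvaluation";
    static final String DEFAULT_ALLOW_SESAME_QUERY_EVALUATION = "false";

    /** Mirrors how BigdataSail parses the option from its properties. */
    static boolean isAllowed(final Properties properties) {
        return Boolean.parseBoolean(properties.getProperty(
                ALLOW_SESAME_QUERY_EVALUATION,
                DEFAULT_ALLOW_SESAME_QUERY_EVALUATION));
    }

    public static void main(final String[] args) {
        final Properties props = new Properties();
        // Default behavior: queries that cannot run natively fail fast.
        System.out.println("default=" + isAllowed(props));
        // Opting in trades an exception for a slow Sesame evaluation.
        props.setProperty(ALLOW_SESAME_QUERY_EVALUATION, "true");
        System.out.println("enabled=" + isAllowed(props));
    }
}
```

Note that once the fallback fires, the diff sets `nativeJoins = false` for the remainder of the query, since partial native execution is not supported.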
From: <tho...@us...> - 2011-01-13 20:25:48

Revision: 4087 http://bigdata.svn.sourceforge.net/bigdata/?rev=4087&view=rev Author: thompsonbry Date: 2011-01-13 20:25:42 +0000 (Thu, 13 Jan 2011) Log Message: ----------- zookeeper CI ant targets added Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/build.properties branches/QUADS_QUERY_BRANCH/build.xml Modified: branches/QUADS_QUERY_BRANCH/build.properties =================================================================== --- branches/QUADS_QUERY_BRANCH/build.properties 2011-01-13 17:56:43 UTC (rev 4086) +++ branches/QUADS_QUERY_BRANCH/build.properties 2011-01-13 20:25:42 UTC (rev 4087) @@ -376,3 +376,7 @@ # on a volume with a lot of room. The directory may be destroyed (by the test harness) # after the performance tests have run their course. perf.run.dir=/usr/bigdata/runs + +# CI properties. +test.zookeeper.installDir=/Users/bryan/zookeeper-3.2.1 + Modified: branches/QUADS_QUERY_BRANCH/build.xml =================================================================== --- branches/QUADS_QUERY_BRANCH/build.xml 2011-01-13 17:56:43 UTC (rev 4086) +++ branches/QUADS_QUERY_BRANCH/build.xml 2011-01-13 20:25:42 UTC (rev 4087) @@ -115,7 +115,7 @@ </target> <!-- Builds the bigdata JAR and bundles it together with all of its dependencies in the ${build.dir}/lib directory. 
--> - <target name="bundleJar" depends="bundle, jar" description="Builds the bigdata JAR and bundles it together with all of its dependencies in the ${build.dir}/lib directory."> + <target name="bundleJar" depends="clean, bundle, jar" description="Builds the bigdata JAR and bundles it together with all of its dependencies in the ${build.dir}/lib directory."> <copy file="${build.dir}/${version}.jar" todir="${build.dir}/lib" /> <!--<property name="myclasspath" refid="runtime.classpath" /> <echo message="${myclasspath}"/>--> @@ -1599,6 +1599,18 @@ <property name="test.codebase.dir" value="${dist.lib.dl}" /> <property name="test.codebase" value="http://${this.hostname}:${test.codebase.port}/jsk-dl.jar" /> + <!-- These zookeeper configuration properties used to inform the test --> + <!-- suite about the zookeeper instance which will be used by the --> + <!-- tests. These properties MUST be consistent with the actual --> + <!-- zookeeper configuration. Zookeeper is assumed (by the tests) to --> + <!-- be running on the localhost. --> + <property name="test.zookeeper.tickTime" value="2000" /> + <property name="test.zookeeper.clientPort" value="2888" /> + <property name="test.zookeeper.leaderPort" value="3888" /> + + <!-- The zookeeper install directory. 
--> + <property name="test.zookeeper.installDir" value="${zookeeper.installDir}" /> + <property name="java.security.policy" value="${dist.var.config.policy}/policy.all" /> <property name="log4j.configuration" value="${bigdata.test.log4j.rel}" /> <property name="java.net.preferIPv4Stack" value="true" /> @@ -1608,6 +1620,7 @@ <copy file="${build.properties.from.file}" todir="${build.properties.test.to.path}" /> + <antcall target="startZookeeper" /> <antcall target="startHttpd" /> <antcall target="startLookup" /> @@ -1616,6 +1629,7 @@ <antcall target="stopLookup" /> <antcall target="stopHttpd" /> + <antcall target="stopZookeeper" /> <!-- This message is noticed by the hudson build and is used to trigger after various after actions. --> @@ -1690,6 +1704,22 @@ </java> </target> + <target name="startZookeeper"> + <echo message="test.zookeeper.installDir=${test.zookeeper.installDir}"/> + <echo>bin/zkServer.sh start +</echo> + <exec command="bin/zkServer.sh start" dir="${test.zookeeper.installDir}" logerror="true"> +</exec> + </target> + + <target name="stopZookeeper"> + <echo message="test.zookeeper.installDir=${test.zookeeper.installDir}"/> + <echo>bin/zkServer.sh start +</echo> + <exec command="bin/zkServer.sh stop" dir="${test.zookeeper.installDir}" logerror="true"> +</exec> + </target> + <!-- runs all junit tests --> <target name="run-junit"> This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
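The r4087 `startZookeeper`/`stopZookeeper` targets simply exec `bin/zkServer.sh start|stop` in the configured `test.zookeeper.installDir` (the `stopZookeeper` target's `<echo>` still says `start` — apparently a copy/paste slip in the echo only, as the `<exec>` correctly passes `stop`). The same invocation can be sketched in plain Java; `ZkServerControl` is a hypothetical helper, and the fallback install directory is a placeholder:

```java
import java.io.File;
import java.io.IOException;

public class ZkServerControl {

    /** Run a command in a working directory and return its exit code. */
    static int run(final File dir, final String... cmd)
            throws IOException, InterruptedException {
        final ProcessBuilder pb = new ProcessBuilder(cmd);
        pb.directory(dir);
        // logerror="true" on the ant <exec>: merge stderr into stdout.
        pb.redirectErrorStream(true);
        final Process p = pb.start();
        p.getInputStream().transferTo(System.out);
        return p.waitFor();
    }

    public static void main(final String[] args) throws Exception {
        // Placeholder default for the test.zookeeper.installDir property.
        final File installDir = new File(System.getProperty(
                "test.zookeeper.installDir", "/usr/local/zookeeper"));
        final String command = args.length > 0 ? args[0] : "start";
        System.exit(run(installDir, "bin/zkServer.sh", command));
    }
}
```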
From: <tho...@us...> - 2011-01-18 17:37:07
Revision: 4122 http://bigdata.svn.sourceforge.net/bigdata/?rev=4122&view=rev Author: thompsonbry Date: 2011-01-18 17:36:58 +0000 (Tue, 18 Jan 2011) Log Message: ----------- Working on removing zookeeper start/stop from the individual unit tests. It will now be started/stopped once per CI run by the top-level ant build. Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/AbstractFedZooTestCase.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/testfed.config branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/service/jini/util/JiniServicesHelper.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/zookeeper/AbstractZooTestCase.java branches/QUADS_QUERY_BRANCH/build.properties branches/QUADS_QUERY_BRANCH/build.xml Modified: branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/AbstractFedZooTestCase.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/AbstractFedZooTestCase.java 2011-01-18 15:30:00 UTC (rev 4121) +++ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/AbstractFedZooTestCase.java 2011-01-18 17:36:58 UTC (rev 4122) @@ -28,6 +28,7 @@ package com.bigdata.jini.start; import java.io.File; +import java.net.InetAddress; import java.util.List; import java.util.UUID; @@ -35,18 +36,17 @@ import net.jini.config.Configuration; import net.jini.config.ConfigurationProvider; -import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.ZooKeeper; -import org.apache.zookeeper.KeeperException.NodeExistsException; import org.apache.zookeeper.ZooDefs.Ids; import org.apache.zookeeper.data.ACL; import com.bigdata.jini.start.config.ZookeeperClientConfig; import com.bigdata.jini.start.process.ProcessHelper; -import com.bigdata.jini.start.process.ZookeeperProcessHelper; import com.bigdata.resources.ResourceFileFilter; import 
com.bigdata.service.jini.JiniClient; import com.bigdata.service.jini.JiniFederation; +import com.bigdata.util.config.NicUtil; +import com.bigdata.zookeeper.ZooHelper; /** * Abstract base class for unit tests requiring a running zookeeper and a @@ -122,9 +122,24 @@ config = ConfigurationProvider.getInstance(args); - // if necessary, start zookeeper (a server instance). - ZookeeperProcessHelper.startZookeeper(config, listener); +// // if necessary, start zookeeper (a server instance). +// ZookeeperProcessHelper.startZookeeper(config, listener); + final int clientPort = Integer.valueOf(System + .getProperty("test.zookeeper.clientPort","2181")); + + // Verify zookeeper is running on the local host at the client port. + { + final InetAddress localIpAddr = NicUtil.getInetAddress(null, 0, + null, true); + try { + ZooHelper.ruok(localIpAddr, clientPort); + } catch (Throwable t) { + fail("Zookeeper not running:: " + localIpAddr + ":" + + clientPort, t); + } + } + /* * FIXME We need to start a jini lookup service for groups = {fedname} * for this test to succeed. Modified: branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/testfed.config =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/testfed.config 2011-01-18 15:30:00 UTC (rev 4121) +++ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/testfed.config 2011-01-18 17:36:58 UTC (rev 4122) @@ -39,11 +39,10 @@ /** * Zookeeper server configuration. - */ + * org.apache.zookeeper.server.quorum.QuorumPeerMain { - /* Directory for zookeeper's persistent state. - */ + // Directory for zookeeper's persistent state. dataDir = new File(bigdata.fedname+"/zookeeper"); // required. @@ -60,19 +59,14 @@ //classpath = new String[] { "${zookeeper.jar}", "${log4j.jar}" }; - /* Optional command line arguments for the JVM used to execute - * zookeeper. 
- * - * Note: swapping for zookeeper is especially bad since the - * operations are serialized, so if anything hits then disk then - * all operations in the queue will have that latency as well. - */ + // Optional command line arguments for the JVM used to execute //args=new String[]{"-Xmx2G"}; // zookeeper server logging configuration (value is a URI!) log4j = bigdata.log4j; } +*/ /* * Service configuration defaults. These can also be specified on a @@ -127,15 +121,14 @@ */ org.apache.zookeeper.ZooKeeper { - /* Root znode for the federation instance. */ + // Root znode for the federation instance. zroot = "/test/"+bigdata.fedname; - /* A comma separated list of host:port pairs, where the port is - * the CLIENT port for the zookeeper server instance. - */ + // A comma separated list of host:port pairs, where the port is + // the CLIENT port for the zookeeper server instance. servers="localhost:2181"; - /* Session timeout (optional). */ + // Session timeout (optional). //sessionTimeout=xxxx; } Modified: branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/service/jini/util/JiniServicesHelper.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/service/jini/util/JiniServicesHelper.java 2011-01-18 15:30:00 UTC (rev 4121) +++ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/service/jini/util/JiniServicesHelper.java 2011-01-18 17:36:58 UTC (rev 4122) @@ -10,6 +10,7 @@ import java.io.InputStream; import java.io.OutputStream; import java.net.BindException; +import java.net.InetAddress; import java.net.ServerSocket; import java.util.Arrays; import java.util.Queue; @@ -32,7 +33,6 @@ import com.bigdata.jini.start.config.ZookeeperClientConfig; import com.bigdata.jini.start.config.ZookeeperServerConfiguration; import com.bigdata.jini.start.process.ProcessHelper; -import com.bigdata.jini.start.process.ZookeeperProcessHelper; import com.bigdata.jini.util.ConfigMath; import 
com.bigdata.jini.util.JiniUtil; import com.bigdata.resources.ResourceFileFilter; @@ -47,6 +47,7 @@ import com.bigdata.service.jini.MetadataServer; import com.bigdata.service.jini.TransactionServer; import com.bigdata.util.concurrent.DaemonThreadFactory; +import com.bigdata.util.config.NicUtil; import com.bigdata.zookeeper.ZooHelper; /** @@ -267,10 +268,10 @@ */ private int clientPort; - /** - * The directory in which zookeeper is running. - */ - private File zooDataDir; +// /** +// * The directory in which zookeeper is running. +// */ +// private File zooDataDir; /** * Starts all services and connects the {@link JiniClient} to the @@ -374,43 +375,46 @@ final String[] options; { - // the zookeeper service directory. - zooDataDir = new File(fedServiceDir, "zookeeper"); - - if(zooDataDir.exists()) { - - // clear out old zookeeper state first. - recursiveDelete(zooDataDir); - - } - - // create. - zooDataDir.mkdirs(); +// // the zookeeper service directory. +// zooDataDir = new File(fedServiceDir, "zookeeper"); +// +// if(zooDataDir.exists()) { +// +// // clear out old zookeeper state first. +// recursiveDelete(zooDataDir); +// +// } +// +// // create. +// zooDataDir.mkdirs(); - try { +// try { - // find ports that are not in use. - clientPort = getPort(2181/* suggestedPort */); - final int peerPort = getPort(2888/* suggestedPort */); - final int leaderPort = getPort(3888/* suggestedPort */); - final String servers = "1=localhost:" + peerPort + ":" - + leaderPort; +// // find ports that are not in use. +// clientPort = getPort(2181/* suggestedPort */); +// final int peerPort = getPort(2888/* suggestedPort */); +// final int leaderPort = getPort(3888/* suggestedPort */); +// final String servers = "1=localhost:" + peerPort + ":" +// + leaderPort; + clientPort = Integer.valueOf(System + .getProperty("test.zookeeper.clientPort","2181")); + options = new String[] { // overrides the clientPort to be unique. QuorumPeerMain.class.getName() + "." 
+ ZookeeperServerConfiguration.Options.CLIENT_PORT + "=" + clientPort, - // overrides servers declaration. - QuorumPeerMain.class.getName() + "." - + ZookeeperServerConfiguration.Options.SERVERS - + "=\"" + servers + "\"", - // overrides the dataDir - QuorumPeerMain.class.getName() + "." - + ZookeeperServerConfiguration.Options.DATA_DIR - + "=new java.io.File(" - + ConfigMath.q(zooDataDir.toString()) + ")"// +// // overrides servers declaration. +// QuorumPeerMain.class.getName() + "." +// + ZookeeperServerConfiguration.Options.SERVERS +// + "=\"" + servers + "\"", +// // overrides the dataDir +// QuorumPeerMain.class.getName() + "." +// + ZookeeperServerConfiguration.Options.DATA_DIR +// + "=new java.io.File(" +// + ConfigMath.q(zooDataDir.toString()) + ")"// }; System.err.println("options=" + Arrays.toString(options)); @@ -418,27 +422,39 @@ final Configuration config = ConfigurationProvider .getInstance(concat(args, options)); - // start zookeeper (a server instance). - final int nstarted = ZookeeperProcessHelper.startZookeeper( - config, serviceListener); +// // start zookeeper (a server instance). +// final int nstarted = ZookeeperProcessHelper.startZookeeper( +// config, serviceListener); +// +// if (nstarted != 1) { +// +// throw new RuntimeException( +// "Expected to start one zookeeper instance, not " +// + nstarted); +// +// } - if (nstarted != 1) { - - throw new RuntimeException( - "Expected to start one zookeeper instance, not " - + nstarted); - + // Verify zookeeper is running on the local host at the client port. + { + final InetAddress localIpAddr = NicUtil.getInetAddress(null, 0, + null, true); + try { + ZooHelper.ruok(localIpAddr, clientPort); + } catch (Throwable t) { + throw new RuntimeException("Zookeeper not running:: " + + localIpAddr + ":" + clientPort, t); + } } - } catch (Throwable t) { +// } catch (Throwable t) { +// +// // don't leave around the dataDir if the setup fails. 
+// recursiveDelete(zooDataDir); +// +// throw new RuntimeException(t); +// +// } - // don't leave around the dataDir if the setup fails. - recursiveDelete(zooDataDir); - - throw new RuntimeException(t); - - } - } /* @@ -679,30 +695,30 @@ } - try { - - ZooHelper.kill(clientPort); - - } catch (Throwable t) { - log.error("Could not kill zookeeper: clientPort=" + clientPort - + " : " + t, t); - } - - if (zooDataDir != null && zooDataDir.exists()) { - - /* - * Wait a bit and then try and delete the zookeeper directory. - */ - - try { - Thread.sleep(250); - } catch (InterruptedException e) { - throw new RuntimeException(e); - } - - recursiveDelete(zooDataDir); - - } +// try { +// +// ZooHelper.kill(clientPort); +// +// } catch (Throwable t) { +// log.error("Could not kill zookeeper: clientPort=" + clientPort +// + " : " + t, t); +// } +// +// if (zooDataDir != null && zooDataDir.exists()) { +// +// /* +// * Wait a bit and then try and delete the zookeeper directory. +// */ +// +// try { +// Thread.sleep(250); +// } catch (InterruptedException e) { +// throw new RuntimeException(e); +// } +// +// recursiveDelete(zooDataDir); +// +// } // // Stop the lookup service. 
// new Thread(new Runnable() { Modified: branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/zookeeper/AbstractZooTestCase.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/zookeeper/AbstractZooTestCase.java 2011-01-18 15:30:00 UTC (rev 4121) +++ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/zookeeper/AbstractZooTestCase.java 2011-01-18 17:36:58 UTC (rev 4122) @@ -32,6 +32,7 @@ import java.io.PrintWriter; import java.io.StringWriter; import java.net.BindException; +import java.net.InetAddress; import java.net.ServerSocket; import java.util.Arrays; import java.util.List; @@ -60,6 +61,7 @@ import com.bigdata.jini.start.process.ZookeeperProcessHelper; import com.bigdata.jini.util.ConfigMath; import com.bigdata.resources.ResourceFileFilter; +import com.bigdata.util.config.NicUtil; /** * Abstract base class for zookeeper integration tests. @@ -145,7 +147,7 @@ protected final MockListener listener = new MockListener(); - private File dataDir = null; +// private File dataDir = null; // the chosen client port. int clientPort = -1; @@ -157,53 +159,71 @@ if (log.isInfoEnabled()) log.info(getName()); - // find ports that are not in use. - clientPort = getPort(2181/* suggestedPort */); - final int peerPort = getPort(2888/* suggestedPort */); - final int leaderPort = getPort(3888/* suggestedPort */); - final String servers = "1=localhost:" + peerPort + ":" + leaderPort; +// // find ports that are not in use. +// clientPort = getPort(2181/* suggestedPort */); +// final int peerPort = getPort(2888/* suggestedPort */); +// final int leaderPort = getPort(3888/* suggestedPort */); +// final String servers = "1=localhost:" + peerPort + ":" + leaderPort; +// +//// // create a temporary file for zookeeper's state. +//// dataDir = File.createTempFile("test", ".zoo"); +//// // delete the file so that it can be re-created as a directory. 
+//// dataDir.delete(); +//// // recreate the file as a directory. +//// dataDir.mkdirs(); +// +// final String[] args = new String[] { +// // The configuration file (overrides follow). +// configFile, +// // overrides the clientPort to be unique. +// QuorumPeerMain.class.getName() + "." +// + ZookeeperServerConfiguration.Options.CLIENT_PORT + "=" +// + clientPort, +// // overrides servers declaration. +// QuorumPeerMain.class.getName() + "." +// + ZookeeperServerConfiguration.Options.SERVERS + "=\"" +// + servers + "\"", +//// // overrides the dataDir +//// QuorumPeerMain.class.getName() + "." +//// + ZookeeperServerConfiguration.Options.DATA_DIR +//// + "=new java.io.File(" +//// + ConfigMath.q(dataDir.toString()) + ")"// +// }; +// +// System.err.println("args=" + Arrays.toString(args)); +// +// final Configuration config = ConfigurationProvider.getInstance(args); + +// final int tickTime = (Integer) config.getEntry(QuorumPeerMain.class +// .getName(), ZookeeperServerConfiguration.Options.TICK_TIME, +// Integer.TYPE); - // create a temporary file for zookeeper's state. - dataDir = File.createTempFile("test", ".zoo"); - // delete the file so that it can be re-created as a directory. - dataDir.delete(); - // recreate the file as a directory. - dataDir.mkdirs(); + final int tickTime = Integer.valueOf(System + .getProperty("test.zookeeper.tickTime","2000")); - final String[] args = new String[] { - // The configuration file (overrides follow). - configFile, - // overrides the clientPort to be unique. - QuorumPeerMain.class.getName() + "." - + ZookeeperServerConfiguration.Options.CLIENT_PORT + "=" - + clientPort, - // overrides servers declaration. - QuorumPeerMain.class.getName() + "." - + ZookeeperServerConfiguration.Options.SERVERS + "=\"" - + servers + "\"", - // overrides the dataDir - QuorumPeerMain.class.getName() + "." 
- + ZookeeperServerConfiguration.Options.DATA_DIR - + "=new java.io.File(" - + ConfigMath.q(dataDir.toString()) + ")"// - }; - - System.err.println("args=" + Arrays.toString(args)); + clientPort = Integer.valueOf(System.getProperty( + "test.zookeeper.clientPort", "2181")); - final Configuration config = ConfigurationProvider.getInstance(args); - - final int tickTime = (Integer) config.getEntry(QuorumPeerMain.class - .getName(), ZookeeperServerConfiguration.Options.TICK_TIME, - Integer.TYPE); - /* - * Note: This is the actual session timeout that the zookeeper service - * will impose on the client. + * Note: This MUST be the actual session timeout that the zookeeper + * service will impose on the client. Some unit tests depend on this. */ this.sessionTimeout = tickTime * 2; - // if necessary, start zookeeper (a server instance). - ZookeeperProcessHelper.startZookeeper(config, listener); + // Verify zookeeper is running on the local host at the client port. + { + final InetAddress localIpAddr = NicUtil.getInetAddress(null, 0, + null, true); + try { + ZooHelper.ruok(localIpAddr, clientPort); + } catch (Throwable t) { + fail("Zookeeper not running:: " + localIpAddr + ":" + + clientPort, t); + } + } + +// // if necessary, start zookeeper (a server instance). +// ZookeeperProcessHelper.startZookeeper(config, listener); zookeeperAccessor = new ZooKeeperAccessor("localhost:" + clientPort, sessionTimeout); @@ -227,8 +247,8 @@ } catch (Throwable t) { - // don't leave around the dataDir if the setup fails. - recursiveDelete(dataDir); +// // don't leave around the dataDir if the setup fails. +// recursiveDelete(dataDir); throw new Exception(t); @@ -256,13 +276,13 @@ } - if (dataDir != null) { +// if (dataDir != null) { +// +// // clean out the zookeeper data dir. +// recursiveDelete(dataDir); +// +// } - // clean out the zookeeper data dir. 
- recursiveDelete(dataDir); - - } - } catch (Throwable t) { log.error(t, t); @@ -527,48 +547,48 @@ } - /** - * Recursively removes any files and subdirectories and then removes the - * file (or directory) itself. - * <p> - * Note: Files that are not recognized will be logged by the - * {@link ResourceFileFilter}. - * - * @param f - * A file or directory. - */ - private void recursiveDelete(final File f) { +// /** +// * Recursively removes any files and subdirectories and then removes the +// * file (or directory) itself. +// * <p> +// * Note: Files that are not recognized will be logged by the +// * {@link ResourceFileFilter}. +// * +// * @param f +// * A file or directory. +// */ +// private void recursiveDelete(final File f) { +// +// if (f.isDirectory()) { +// +// final File[] children = f.listFiles(); +// +// if (children == null) { +// +// // The directory does not exist. +// return; +// +// } +// +// for (int i = 0; i < children.length; i++) { +// +// recursiveDelete(children[i]); +// +// } +// +// } +// +// if(log.isInfoEnabled()) +// log.info("Removing: " + f); +// +// if (f.exists() && !f.delete()) { +// +// log.warn("Could not remove: " + f); +// +// } +// +// } - if (f.isDirectory()) { - - final File[] children = f.listFiles(); - - if (children == null) { - - // The directory does not exist. - return; - - } - - for (int i = 0; i < children.length; i++) { - - recursiveDelete(children[i]); - - } - - } - - if(log.isInfoEnabled()) - log.info("Removing: " + f); - - if (f.exists() && !f.delete()) { - - log.warn("Could not remove: " + f); - - } - - } - /** * Recursive delete of znodes. * Modified: branches/QUADS_QUERY_BRANCH/build.properties =================================================================== --- branches/QUADS_QUERY_BRANCH/build.properties 2011-01-18 15:30:00 UTC (rev 4121) +++ branches/QUADS_QUERY_BRANCH/build.properties 2011-01-18 17:36:58 UTC (rev 4122) @@ -377,6 +377,8 @@ # after the performance tests have run their course. 
perf.run.dir=/usr/bigdata/runs -# CI properties. +# CI properties. These must agree with the actual installation directory and zoo.cfg +# file for the zookeeper instance used to run CI. test.zookeeper.installDir=/Users/bryan/zookeeper-3.2.1 - +test.zookeeper.tickTime=2000 +test.zookeeper.clientPort=2888 Modified: branches/QUADS_QUERY_BRANCH/build.xml =================================================================== --- branches/QUADS_QUERY_BRANCH/build.xml 2011-01-18 15:30:00 UTC (rev 4121) +++ branches/QUADS_QUERY_BRANCH/build.xml 2011-01-18 17:36:58 UTC (rev 4122) @@ -1606,7 +1606,6 @@ <!-- be running on the localhost. --> <property name="test.zookeeper.tickTime" value="2000" /> <property name="test.zookeeper.clientPort" value="2888" /> - <property name="test.zookeeper.leaderPort" value="3888" /> <!-- The zookeeper install directory. --> <property name="test.zookeeper.installDir" value="${zookeeper.installDir}" /> This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
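With r4122 the unit tests no longer launch zookeeper themselves; they read `test.zookeeper.clientPort` and `test.zookeeper.tickTime` from system properties (deriving `sessionTimeout = tickTime * 2`) and merely verify that the externally managed server is alive via `ZooHelper.ruok(...)`. That check is zookeeper's standard `ruok` four-letter-word probe, which a healthy server answers with `imok`. A self-contained sketch of the probe follows; `ZkRuok` is a hypothetical stand-in for `com.bigdata.zookeeper.ZooHelper`:

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ZkRuok {

    /**
     * Send zookeeper's "ruok" four-letter word to addr:clientPort and return
     * the raw reply ("imok" from a healthy server; "" if the connection
     * closes without a reply).
     */
    public static String ruok(final InetAddress addr, final int clientPort)
            throws IOException {
        try (Socket s = new Socket(addr, clientPort)) {
            s.setSoTimeout(2000);
            s.getOutputStream().write("ruok".getBytes(StandardCharsets.US_ASCII));
            s.getOutputStream().flush();
            // Read up to 4 reply bytes; the server closes after answering.
            final byte[] buf = new byte[4];
            int off = 0, n;
            while (off < buf.length
                    && (n = s.getInputStream().read(buf, off, buf.length - off)) > 0) {
                off += n;
            }
            return new String(buf, 0, off, StandardCharsets.US_ASCII);
        }
    }

    public static void main(final String[] args) throws IOException {
        // Defaults mirror the r4122 test code.
        final int clientPort = Integer.parseInt(
                System.getProperty("test.zookeeper.clientPort", "2181"));
        System.out.println(ruok(InetAddress.getLocalHost(), clientPort));
    }
}
```

This is why the build.properties comment warns that the `test.zookeeper.*` values MUST agree with the actual zoo.cfg of the CI zookeeper instance: the probe only confirms that *some* server answers on that port.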
From: <tho...@us...> - 2011-01-18 21:50:48
Revision: 4124 http://bigdata.svn.sourceforge.net/bigdata/?rev=4124&view=rev Author: thompsonbry Date: 2011-01-18 21:50:42 +0000 (Tue, 18 Jan 2011) Log Message: ----------- Put the parameterized clientPort and tickTime into the environment when running junit. Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/AbstractFedZooTestCase.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/DestroyTransactionService.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/zookeeper/AbstractZooTestCase.java branches/QUADS_QUERY_BRANCH/build.xml Modified: branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/AbstractFedZooTestCase.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/AbstractFedZooTestCase.java 2011-01-18 20:24:39 UTC (rev 4123) +++ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/AbstractFedZooTestCase.java 2011-01-18 21:50:42 UTC (rev 4124) @@ -86,13 +86,13 @@ // ACL used for the unit tests. 
protected final List<ACL> acl = Ids.OPEN_ACL_UNSAFE; - Configuration config; + protected Configuration config; final protected MockListener listener = new MockListener(); - JiniFederation fed; + protected JiniFederation<?> fed; - String zrootname = null; + protected String zrootname = null; public void setUp() throws Exception { Modified: branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/DestroyTransactionService.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/DestroyTransactionService.java 2011-01-18 20:24:39 UTC (rev 4123) +++ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/DestroyTransactionService.java 2011-01-18 21:50:42 UTC (rev 4124) @@ -56,11 +56,11 @@ public static void main(String[] args) throws InterruptedException, RemoteException { - JiniFederation fed = JiniClient.newInstance(args).connect(); + final JiniFederation<?> fed = JiniClient.newInstance(args).connect(); try { - IService service = fed.getTransactionService(); + final IService service = fed.getTransactionService(); if (service == null) { Modified: branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/zookeeper/AbstractZooTestCase.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/zookeeper/AbstractZooTestCase.java 2011-01-18 20:24:39 UTC (rev 4123) +++ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/zookeeper/AbstractZooTestCase.java 2011-01-18 21:50:42 UTC (rev 4124) @@ -27,21 +27,17 @@ package com.bigdata.zookeeper; -import java.io.File; import java.io.IOException; import java.io.PrintWriter; import java.io.StringWriter; import java.net.BindException; import java.net.InetAddress; import java.net.ServerSocket; -import java.util.Arrays; import java.util.List; import java.util.concurrent.TimeUnit; import 
java.util.concurrent.locks.ReentrantLock; import junit.framework.TestCase2; -import net.jini.config.Configuration; -import net.jini.config.ConfigurationProvider; import org.apache.log4j.Level; import org.apache.zookeeper.CreateMode; @@ -53,14 +49,9 @@ import org.apache.zookeeper.KeeperException.SessionExpiredException; import org.apache.zookeeper.ZooDefs.Ids; import org.apache.zookeeper.data.ACL; -import org.apache.zookeeper.server.quorum.QuorumPeerMain; import com.bigdata.jini.start.MockListener; -import com.bigdata.jini.start.config.ZookeeperServerConfiguration; import com.bigdata.jini.start.process.ProcessHelper; -import com.bigdata.jini.start.process.ZookeeperProcessHelper; -import com.bigdata.jini.util.ConfigMath; -import com.bigdata.resources.ResourceFileFilter; import com.bigdata.util.config.NicUtil; /** @@ -150,7 +141,7 @@ // private File dataDir = null; // the chosen client port. - int clientPort = -1; + protected int clientPort = -1; public void setUp() throws Exception { Modified: branches/QUADS_QUERY_BRANCH/build.xml =================================================================== --- branches/QUADS_QUERY_BRANCH/build.xml 2011-01-18 20:24:39 UTC (rev 4123) +++ branches/QUADS_QUERY_BRANCH/build.xml 2011-01-18 21:50:42 UTC (rev 4124) @@ -1846,6 +1846,9 @@ <sysproperty key="app.home" value="${app.home}" /> <sysproperty key="default.nic" value="${default.nic}" /> + <sysproperty key="test.zookeeper.clientPort" value="${test.zookeeper.clientPort}" /> + <sysproperty key="test.zookeeper.tickTime" value="${test.zookeeper.tickTime}" /> + <sysproperty key="classserver.jar" value="${dist.lib}/classserver.jar" /> <sysproperty key="colt.jar" value="${dist.lib}/colt.jar" /> <sysproperty key="cweb-commons.jar" value="${dist.lib}/cweb-commons.jar" /> This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
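For readers following r4124: the Ant `<sysproperty>` entries make `test.zookeeper.clientPort` and `test.zookeeper.tickTime` visible to the forked JUnit JVM via `System.getProperty`. The sketch below models that flow; the `ZkTestConfig` class name is illustrative (the real code lives in the `AbstractZooTestCase`/`AbstractFedZooTestCase` setup methods), but the property names and the fallback-default pattern are the ones used in the commit.

```java
// Minimal model of how an Ant <sysproperty> value reaches a JUnit test.
// ZkTestConfig is a hypothetical stand-in for the test base classes.
public class ZkTestConfig {

    // Falls back to a hard-coded default when Ant did not pass the property.
    static int clientPort() {
        return Integer.valueOf(
                System.getProperty("test.zookeeper.clientPort", "2081"));
    }

    static int tickTime() {
        return Integer.valueOf(
                System.getProperty("test.zookeeper.tickTime", "2000"));
    }

    public static void main(String[] args) {
        // Simulate ant: <sysproperty key="test.zookeeper.clientPort" .../>
        System.setProperty("test.zookeeper.clientPort", "2181");
        System.out.println("clientPort=" + clientPort());
        System.out.println("tickTime=" + tickTime());
    }
}
```

Note that the hard-coded fallback must agree with the running zookeeper's `zoo.cfg`, which is exactly the inconsistency the follow-up commit (r4129) fixes.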
From: <tho...@us...> - 2011-01-19 18:22:44
Revision: 4129 http://bigdata.svn.sourceforge.net/bigdata/?rev=4129&view=rev Author: thompsonbry Date: 2011-01-19 18:22:38 +0000 (Wed, 19 Jan 2011) Log Message: ----------- I had the wrong default client port for zookeeper in the code in several places. Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/AbstractFedZooTestCase.java branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/zookeeper/AbstractZooTestCase.java branches/QUADS_QUERY_BRANCH/build.properties branches/QUADS_QUERY_BRANCH/build.xml Modified: branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/AbstractFedZooTestCase.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/AbstractFedZooTestCase.java 2011-01-19 16:13:24 UTC (rev 4128) +++ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/jini/start/AbstractFedZooTestCase.java 2011-01-19 18:22:38 UTC (rev 4129) @@ -126,7 +126,7 @@ // ZookeeperProcessHelper.startZookeeper(config, listener); final int clientPort = Integer.valueOf(System - .getProperty("test.zookeeper.clientPort","2181")); + .getProperty("test.zookeeper.clientPort","2081")); // Verify zookeeper is running on the local host at the client port. 
{ Modified: branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/zookeeper/AbstractZooTestCase.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/zookeeper/AbstractZooTestCase.java 2011-01-19 16:13:24 UTC (rev 4128) +++ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/test/com/bigdata/zookeeper/AbstractZooTestCase.java 2011-01-19 18:22:38 UTC (rev 4129) @@ -30,9 +30,7 @@ import java.io.IOException; import java.io.PrintWriter; import java.io.StringWriter; -import java.net.BindException; import java.net.InetAddress; -import java.net.ServerSocket; import java.util.List; import java.util.concurrent.TimeUnit; import java.util.concurrent.locks.ReentrantLock; @@ -73,33 +71,33 @@ super(name); } - /** - * Return an open port on current machine. Try the suggested port first. If - * suggestedPort is zero, just select a random port - */ - protected static int getPort(final int suggestedPort) throws IOException { - - ServerSocket openSocket; - - try { - - openSocket = new ServerSocket(suggestedPort); - - } catch (BindException ex) { - - // the port is busy, so look for a random open port - openSocket = new ServerSocket(0); - - } +// /** +// * Return an open port on current machine. Try the suggested port first. If +// * suggestedPort is zero, just select a random port +// */ +// protected static int getPort(final int suggestedPort) throws IOException { +// +// ServerSocket openSocket; +// +// try { +// +// openSocket = new ServerSocket(suggestedPort); +// +// } catch (BindException ex) { +// +// // the port is busy, so look for a random open port +// openSocket = new ServerSocket(0); +// +// } +// +// final int port = openSocket.getLocalPort(); +// +// openSocket.close(); +// +// return port; +// +// } - final int port = openSocket.getLocalPort(); - - openSocket.close(); - - return port; - - } - /** * A configuration file used by some of the unit tests in this package. 
It * contains a description of the zookeeper server instance in case we need @@ -193,7 +191,7 @@ .getProperty("test.zookeeper.tickTime","2000")); clientPort = Integer.valueOf(System.getProperty( - "test.zookeeper.clientPort", "2181")); + "test.zookeeper.clientPort", "2081")); /* * Note: This MUST be the actual session timeout that the zookeeper @@ -202,9 +200,9 @@ this.sessionTimeout = tickTime * 2; // Verify zookeeper is running on the local host at the client port. + final InetAddress localIpAddr = NicUtil.getInetAddress(null, 0, + null, true); { - final InetAddress localIpAddr = NicUtil.getInetAddress(null, 0, - null, true); try { ZooHelper.ruok(localIpAddr, clientPort); } catch (Throwable t) { @@ -216,7 +214,9 @@ // // if necessary, start zookeeper (a server instance). // ZookeeperProcessHelper.startZookeeper(config, listener); - zookeeperAccessor = new ZooKeeperAccessor("localhost:" + clientPort, sessionTimeout); + zookeeperAccessor = new ZooKeeperAccessor(localIpAddr + .getHostAddress() + + ":" + clientPort, sessionTimeout); zookeeper = zookeeperAccessor.getZookeeper(); Modified: branches/QUADS_QUERY_BRANCH/build.properties =================================================================== --- branches/QUADS_QUERY_BRANCH/build.properties 2011-01-19 16:13:24 UTC (rev 4128) +++ branches/QUADS_QUERY_BRANCH/build.properties 2011-01-19 18:22:38 UTC (rev 4129) @@ -381,4 +381,4 @@ # file for the zookeeper instance used to run CI. test.zookeeper.installDir=/Users/bryan/zookeeper-3.2.1 test.zookeeper.tickTime=2000 -test.zookeeper.clientPort=2181 +test.zookeeper.clientPort=2081 Modified: branches/QUADS_QUERY_BRANCH/build.xml =================================================================== --- branches/QUADS_QUERY_BRANCH/build.xml 2011-01-19 16:13:24 UTC (rev 4128) +++ branches/QUADS_QUERY_BRANCH/build.xml 2011-01-19 18:22:38 UTC (rev 4129) @@ -1604,8 +1604,8 @@ <!-- tests. These properties MUST be consistent with the actual --> <!-- zookeeper configuration. 
Zookeeper is assumed (by the tests) to --> <!-- be running on the localhost. --> - <property name="test.zookeeper.tickTime" value="2000" /> - <property name="test.zookeeper.clientPort" value="2888" /> + <property name="test.zookeeper.tickTime" value="${test.zookeeper.tickTime}" /> + <property name="test.zookeeper.clientPort" value="${test.zookeeper.clientPort}" /> <!-- The zookeeper install directory. --> <property name="test.zookeeper.installDir" value="${zookeeper.installDir}" /> This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
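The r4129 diff above also pins the test session timeout to `tickTime * 2`, with a comment that this MUST match what the zookeeper server will actually grant (zookeeper clamps requested session timeouts to a multiple of its tick). A one-line sketch of that derivation, using the commit's own value:

```java
// Derivation used in AbstractZooTestCase (r4129): the session timeout the
// tests rely on is twice the configured tickTime, matching the minimum
// session timeout a zookeeper server will negotiate.
public class SessionTimeout {

    static int minSessionTimeout(int tickTimeMillis) {
        return tickTimeMillis * 2;
    }

    public static void main(String[] args) {
        // With the CI default tickTime=2000ms, tests assume a 4000ms session.
        System.out.println(minSessionTimeout(2000));
    }
}
```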
From: <mrp...@us...> - 2011-01-20 18:58:16
Revision: 4142 http://bigdata.svn.sourceforge.net/bigdata/?rev=4142&view=rev Author: mrpersonick Date: 2011-01-20 18:58:10 +0000 (Thu, 20 Jan 2011) Log Message: ----------- turned constraints into conditional routing ops instead of annotations Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/CompareBOp.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/Rule2BOpUtility.java Modified: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/CompareBOp.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/CompareBOp.java 2011-01-20 17:49:16 UTC (rev 4141) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/CompareBOp.java 2011-01-20 18:58:10 UTC (rev 4142) @@ -89,7 +89,8 @@ final IV right = ((IValueExpression<IV>) get(1)).get(s); if (left == null || right == null) - return true; // not yet bound. +// return true; // not yet bound. 
+ return false; // no longer allow unbound values final CompareOp op = (CompareOp) getProperty(Annotations.OP); Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/Rule2BOpUtility.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/Rule2BOpUtility.java 2011-01-20 17:49:16 UTC (rev 4141) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/Rule2BOpUtility.java 2011-01-20 18:58:10 UTC (rev 4142) @@ -620,12 +620,12 @@ // } // just add all the constraints to the very last tail for now - if (i == (order.length-1) && rule.getConstraintCount() > 0) { - final Iterator<IConstraint> it = rule.getConstraints(); - while (it.hasNext()) { - constraints.add(it.next()); - } - } +// if (i == (order.length-1) && rule.getConstraintCount() > 0) { +// final Iterator<IConstraint> it = rule.getConstraints(); +// while (it.hasNext()) { +// constraints.add(it.next()); +// } +// } // annotations for this join. final List<NV> anns = new LinkedList<NV>(); @@ -730,6 +730,24 @@ } + if (rule.getConstraintCount() > 0) { + final Iterator<IConstraint> it = rule.getConstraints(); + while (it.hasNext()) { + final IConstraint c = it.next(); + final int condId = idFactory.incrementAndGet(); + final PipelineOp condOp = applyQueryHints( + new ConditionalRoutingOp(new BOp[]{left}, + NV.asMap(new NV[]{// + new NV(BOp.Annotations.BOP_ID,condId), + new NV(ConditionalRoutingOp.Annotations.CONDITION, c), + })), queryHints); + left = condOp; + if (log.isDebugEnabled()) { + log.debug("adding conditional routing op: " + condOp); + } + } + } + if (log.isInfoEnabled()) { // just for now while i'm debugging log.info("rule=" + rule + ":::query=" This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
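The idea behind r4142 is that each constraint becomes its own pipeline stage (a `ConditionalRoutingOp` downstream of the joins) instead of an annotation bundled onto the last join, so a solution is dropped as soon as any condition fails. The sketch below models that staging with plain `Predicate`s; the names are illustrative, not bigdata APIs.

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Illustrative model of constraints-as-routing-ops: each constraint is its
// own filter stage, applied in sequence, rather than one annotation on the
// final join. Solutions failing any stage are routed out of the pipeline.
public class ConditionalRouting {

    static <T> List<T> route(List<T> solutions, List<Predicate<T>> constraints) {
        List<T> current = solutions;
        for (Predicate<T> constraint : constraints) {
            // One stage per constraint, mirroring one ConditionalRoutingOp
            // per IConstraint in the commit.
            current = current.stream()
                             .filter(constraint)
                             .collect(Collectors.toList());
        }
        return current;
    }
}
```

A side effect visible in the `CompareBOp` hunk above: since constraints now run as explicit stages after the joins, an unbound value is treated as a failure (`return false`) rather than deferred (`return true`).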
From: <tho...@us...> - 2011-02-09 17:12:37
Revision: 4186 http://bigdata.svn.sourceforge.net/bigdata/?rev=4186&view=rev Author: thompsonbry Date: 2011-02-09 17:12:31 +0000 (Wed, 09 Feb 2011) Log Message: ----------- Fixed the deep copy constructor signature for Bind(). Fixed the set of unit tests run by build.xml (CI) (I reconciled them with com.bigdata.TestAll). Fixed unit test for DistinctBindingSets to specify the necessary annotations. Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/Bind.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/bop/solutions/TestDistinctBindingSets.java branches/QUADS_QUERY_BRANCH/build.xml Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/Bind.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/Bind.java 2011-02-09 17:00:01 UTC (rev 4185) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/Bind.java 2011-02-09 17:12:31 UTC (rev 4186) @@ -18,7 +18,7 @@ /** * Required deep copy constructor. 
*/ - public Bind(BOpBase op) { + public Bind(Bind<E> op) { super(op); } Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/bop/solutions/TestDistinctBindingSets.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/bop/solutions/TestDistinctBindingSets.java 2011-02-09 17:00:01 UTC (rev 4185) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/bop/solutions/TestDistinctBindingSets.java 2011-02-09 17:12:31 UTC (rev 4186) @@ -36,6 +36,7 @@ import com.bigdata.bop.BOp; import com.bigdata.bop.BOpContext; +import com.bigdata.bop.BOpEvaluationContext; import com.bigdata.bop.Constant; import com.bigdata.bop.IBindingSet; import com.bigdata.bop.IConstant; @@ -178,6 +179,10 @@ NV.asMap(new NV[]{// new NV(DistinctBindingSetOp.Annotations.BOP_ID,distinctId),// new NV(DistinctBindingSetOp.Annotations.VARIABLES,new IVariable[]{x}),// + new NV(MemorySortOp.Annotations.EVALUATION_CONTEXT, + BOpEvaluationContext.CONTROLLER),// +// new NV(MemorySortOp.Annotations.SHARED_STATE, +// true),// })); // the expected solutions Modified: branches/QUADS_QUERY_BRANCH/build.xml =================================================================== --- branches/QUADS_QUERY_BRANCH/build.xml 2011-02-09 17:00:01 UTC (rev 4185) +++ branches/QUADS_QUERY_BRANCH/build.xml 2011-02-09 17:12:31 UTC (rev 4186) @@ -1706,29 +1706,27 @@ <test name="com.bigdata.striterator.TestAll" todir="${test.results.dir}" unless="testName" /> <test name="com.bigdata.counters.TestAll" todir="${test.results.dir}" unless="testName" /> <test name="com.bigdata.rawstore.TestAll" todir="${test.results.dir}" unless="testName" /> - <test name="com.bigdata.btree.TestAll" todir="${test.results.dir}" unless="testName" /> <test name="com.bigdata.concurrent.TestAll" todir="${test.results.dir}" unless="testName" /> - <test name="com.bigdata.quorum.TestAll" todir="${test.results.dir}" unless="testName" /> <test 
name="com.bigdata.ha.TestAll" todir="${test.results.dir}" unless="testName" /> <test name="com.bigdata.io.writecache.TestAll" todir="${test.results.dir}" unless="testName" /> <test name="com.bigdata.journal.TestAll" todir="${test.results.dir}" unless="testName" /> -<!-- To run only specific test suites. - <test name="com.bigdata.journal.TestWORMStrategy" todir="${test.results.dir}" unless="testName" /> - <test name="com.bigdata.rwstore.TestRWJournal" todir="${test.results.dir}" unless="testName" /> - --> <!-- Performance of this test suite has regressed and needs to be investigated. - <test name="com.bigdata.journal.ha.TestAll" todir="${test.results.dir}" unless="testName" /> + <test name="com.bigdata.journal.ha.TestAll" todir="${test.results.dir}" unless="testName" /> --> -<!-- end of specific journal test suite runs. --> - <test name="com.bigdata.resources.TestAll" todir="${test.results.dir}" unless="testName" /> + <test name="com.bigdata.relation.TestAll" todir="${test.results.dir}" unless="testName" /> + <test name="com.bigdata.bop.TestAll" todir="${test.results.dir}" unless="testName" /> + <test name="com.bigdata.relation.rule.eval.TestAll" todir="${test.results.dir}" unless="testName" /> <test name="com.bigdata.mdi.TestAll" todir="${test.results.dir}" unless="testName" /> <test name="com.bigdata.service.TestAll" todir="${test.results.dir}" unless="testName" /> + <test name="com.bigdata.bop.fed.TestAll" todir="${test.results.dir}" unless="testName" /> <test name="com.bigdata.sparse.TestAll" todir="${test.results.dir}" unless="testName" /> <test name="com.bigdata.search.TestAll" todir="${test.results.dir}" unless="testName" /> - <test name="com.bigdata.relation.TestAll" todir="${test.results.dir}" unless="testName" /> + <!-- not suppported yet. 
+ <test name="com.bigdata.bfs.TestAll" todir="${test.results.dir}" unless="testName" /> + --> <!-- See https://sourceforge.net/apps/trac/bigdata/ticket/53 --> <test name="com.bigdata.jini.TestAll" todir="${test.results.dir}" unless="testName" /> This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
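On the `Bind` fix in r4186: the operator tree's deep-copy convention expects each concrete operator to declare a copy constructor taking its *own* type, which is why the signature changed from `Bind(BOpBase op)` to `Bind(Bind<E> op)`. A minimal model of that convention (these are toy classes, not the real `BOpBase` hierarchy):

```java
// Toy model of the deep-copy constructor convention: the base class copies
// its argument array, and each subclass's copy constructor must accept its
// own concrete type (Bind(Bind<E>), not Bind(BOpBase)).
class BOpBase {
    final Object[] args;

    BOpBase(Object[] args) {
        this.args = args.clone(); // deep copy of the argument slots
    }

    BOpBase(BOpBase op) {
        this(op.args);
    }
}

class Bind<E> extends BOpBase {
    Bind(Object[] args) {
        super(args);
    }

    // Required deep copy constructor: takes Bind<E>, its own type.
    Bind(Bind<E> op) {
        super(op);
    }
}
```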
From: <mrp...@us...> - 2011-02-15 16:35:16
Revision: 4199 http://bigdata.svn.sourceforge.net/bigdata/?rev=4199&view=rev Author: mrpersonick Date: 2011-02-15 16:35:10 +0000 (Tue, 15 Feb 2011) Log Message: ----------- pruning variable bindings before they are materialized into BigdataValues Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/BigdataBindingSetResolverator.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java Modified: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/BigdataBindingSetResolverator.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/BigdataBindingSetResolverator.java 2011-02-15 16:21:09 UTC (rev 4198) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/BigdataBindingSetResolverator.java 2011-02-15 16:35:10 UTC (rev 4199) @@ -30,6 +30,8 @@ public class BigdataBindingSetResolverator extends AbstractChunkedResolverator<IBindingSet, IBindingSet, AbstractTripleStore> { + + private final IVariable[] required; /** * @@ -45,13 +47,23 @@ */ public BigdataBindingSetResolverator(final AbstractTripleStore db, final IChunkedOrderedIterator<IBindingSet> src) { + + this(db, src, null); + + } + + public BigdataBindingSetResolverator(final AbstractTripleStore db, + final IChunkedOrderedIterator<IBindingSet> src, + final IVariable[] required) { super(db, src, new BlockingBuffer<IBindingSet[]>( db.getChunkOfChunksCapacity(), db.getChunkCapacity(), db.getChunkTimeout(), TimeUnit.MILLISECONDS)); - + + this.required = required; + } /** @@ -87,28 +99,49 @@ assert bindingSet != null; - final Iterator<Map.Entry<IVariable, IConstant>> itr = bindingSet - .iterator(); + if (required == null) { + + final Iterator<Map.Entry<IVariable, IConstant>> itr = + bindingSet.iterator(); - while (itr.hasNext()) { - - final Map.Entry<IVariable, IConstant> entry = itr.next(); 
- - final IV iv = (IV) entry.getValue().get(); - - if (iv == null) { - - throw new RuntimeException("NULL? : var=" + entry.getKey() - + ", " + bindingSet); - - } - - ids.add(iv); - + while (itr.hasNext()) { + + final Map.Entry<IVariable, IConstant> entry = itr.next(); + + final IV iv = (IV) entry.getValue().get(); + + if (iv == null) { + + throw new RuntimeException("NULL? : var=" + entry.getKey() + + ", " + bindingSet); + + } + + ids.add(iv); + + } + + } else { + + for (IVariable v : required) { + + final IV iv = (IV) bindingSet.get(v).get(); + + if (iv == null) { + + throw new RuntimeException("NULL? : var=" + v + + ", " + bindingSet); + + } + + ids.add(iv); + + } + } } - + if (log.isInfoEnabled()) log.info("Resolving " + ids.size() + " term identifiers"); @@ -171,14 +204,19 @@ if (terms == null) throw new IllegalArgumentException(); - final IBindingSet bindingSet = solution; - - if(bindingSet == null) { + if(solution == null) { throw new IllegalStateException("BindingSet was not materialized"); } + final IBindingSet bindingSet; + if (required == null) { + bindingSet = solution; + } else { + bindingSet = solution.copy(required); + } + final Iterator<Map.Entry<IVariable, IConstant>> itr = bindingSet .iterator(); @@ -214,7 +252,7 @@ value)); } - + return bindingSet; } Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java 2011-02-15 16:21:09 UTC (rev 4198) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java 2011-02-15 16:35:10 UTC (rev 4199) @@ -833,7 +833,7 @@ * Begin native bigdata evaluation. 
*/ CloseableIteration<BindingSet, QueryEvaluationException> result = doEvaluateNatively( - query, bs, queryEngine);// , sesameFilters); + query, bs, queryEngine, required);// , sesameFilters); /* * Use the basic filter iterator for any remaining filters which will be @@ -868,7 +868,7 @@ CloseableIteration<BindingSet, QueryEvaluationException> doEvaluateNatively(final PipelineOp query, final BindingSet bs, - final QueryEngine queryEngine + final QueryEngine queryEngine, final IVariable[] required // , final Collection<Filter> sesameFilters ) throws QueryEvaluationException { @@ -883,7 +883,7 @@ * Wrap up the native bigdata query solution iterator as Sesame * compatible iteration with materialized RDF Values. */ - return wrapQuery(runningQuery);//, sesameFilters); + return wrapQuery(runningQuery, required);//, sesameFilters); } catch (UnsupportedOperatorException t) { if (runningQuery != null) { @@ -919,7 +919,7 @@ * @throws QueryEvaluationException */ CloseableIteration<BindingSet, QueryEvaluationException> wrapQuery( - final IRunningQuery runningQuery + final IRunningQuery runningQuery, final IVariable[] required ) throws QueryEvaluationException { // The iterator draining the query solutions. @@ -938,7 +938,7 @@ // Convert bigdata binding sets to Sesame binding sets. new Bigdata2Sesame2BindingSetIterator<QueryEvaluationException>( // Materialize IVs as RDF Values. - new BigdataBindingSetResolverator(database, it2).start( + new BigdataBindingSetResolverator(database, it2, required).start( database.getExecutorService()))); return result; This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
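The pruning step in r4199 restricts each binding set to the `required` (projected) variables before their IVs are materialized into `BigdataValue`s, so non-projected bindings never pay the materialization cost. The sketch below mirrors the `copy(required)` logic from the diff, including the null-binding check, using plain maps in place of `IBindingSet`:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch of BigdataBindingSetResolverator's pruning (r4199): keep only the
// required variables; a null set of required variables means keep everything.
// Plain maps stand in for IBindingSet; the method name is illustrative.
public class PruneBindings {

    static Map<String, Object> copyRequired(Map<String, Object> bindings,
            Set<String> required) {
        if (required == null) {
            return bindings; // no projection: materialize all bindings
        }
        final Map<String, Object> pruned = new HashMap<>();
        for (String var : required) {
            final Object val = bindings.get(var);
            if (val == null) {
                // Mirrors the commit's "NULL? : var=..." sanity check.
                throw new RuntimeException("NULL? : var=" + var);
            }
            pruned.put(var, val);
        }
        return pruned;
    }
}
```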
From: <mrp...@us...> - 2011-02-22 23:40:59
Revision: 4230 http://bigdata.svn.sourceforge.net/bigdata/?rev=4230&view=rev Author: mrpersonick Date: 2011-02-22 23:40:52 +0000 (Tue, 22 Feb 2011) Log Message: ----------- refactor constraints -> value expressions Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/bop/rdf/TestBOpUtility.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/bop/rdf/joinGraph/TestJoinGraphOnBSBMData.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/constraints/TestInlineConstraints.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestRuleExpansion.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/spo/TestSPORelation.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java Modified: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/bop/rdf/TestBOpUtility.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/bop/rdf/TestBOpUtility.java 2011-02-22 23:39:14 UTC (rev 4229) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/bop/rdf/TestBOpUtility.java 2011-02-22 23:40:52 UTC (rev 4230) @@ -32,16 +32,30 @@ import junit.framework.TestCase2; +import org.openrdf.model.Literal; +import org.openrdf.model.URI; +import org.openrdf.model.ValueFactory; +import org.openrdf.model.vocabulary.RDF; +import org.openrdf.model.vocabulary.RDFS; +import org.openrdf.query.QueryLanguage; +import org.openrdf.query.TupleQuery; +import org.openrdf.query.TupleQueryResult; +import org.openrdf.repository.Repository; +import org.openrdf.repository.RepositoryConnection; +import org.openrdf.repository.sail.SailRepository; +import org.openrdf.sail.Sail; +import org.openrdf.sail.memory.MemoryStore; + import com.bigdata.bop.BOp; -import com.bigdata.bop.BOpBase; import com.bigdata.bop.BOpUtility; import com.bigdata.bop.Constant; 
import com.bigdata.bop.IBindingSet; -import com.bigdata.bop.IConstraint; import com.bigdata.bop.IValueExpression; import com.bigdata.bop.Var; -import com.bigdata.bop.constraint.BOpConstraint; -import com.bigdata.bop.constraint.OR; +import com.bigdata.rdf.internal.IV; +import com.bigdata.rdf.internal.constraints.OrBOp; +import com.bigdata.rdf.internal.constraints.ValueExpressionBOp; +import com.bigdata.rdf.store.BD; /** * Unit tests for {@link BOpUtility}. @@ -76,18 +90,18 @@ private BOp generateBOp(final int count,final IValueExpression<?> a) { - IConstraint bop = null; + IValueExpression bop = null; for (int i = 0; i < count; i++) { - final IConstraint c = new DummyConstraint( + final IValueExpression c = new DummyVE( new BOp[] { a, new Constant<Integer>(i) }, null/*annotations*/); if (bop == null) { bop = c; } else { - bop = new OR(c, bop); + bop = new OrBOp(c, bop); } } @@ -132,24 +146,66 @@ } - private static class DummyConstraint extends BOpConstraint { + private static class DummyVE extends ValueExpressionBOp { /** * */ private static final long serialVersionUID = 1942393209821562541L; - public DummyConstraint(BOp[] args, Map<String, Object> annotations) { + public DummyVE(BOp[] args, Map<String, Object> annotations) { super(args, annotations); } - public DummyConstraint(BOpBase op) { + public DummyVE(ValueExpressionBOp op) { super(op); } - public boolean accept(IBindingSet bindingSet) { + public IV get(IBindingSet bindingSet) { throw new RuntimeException(); } } + + public void testOpenWorldEq() throws Exception { + + final Sail sail = new MemoryStore(); + final Repository repo = new SailRepository(sail); + repo.initialize(); + final RepositoryConnection cxn = repo.getConnection(); + + try { + + final ValueFactory vf = sail.getValueFactory(); + + final URI mike = vf.createURI(BD.NAMESPACE + "mike"); + final URI age = vf.createURI(BD.NAMESPACE + "age"); + final Literal mikeAge = vf.createLiteral(34); + + cxn.add(vf.createStatement(mike, RDF.TYPE, 
RDFS.RESOURCE)); + cxn.add(vf.createStatement(mike, age, mikeAge)); + + final String query = + "select * " + + "where { " + + " ?s ?p ?o . " + + " filter (?o < 40) " + + "}"; + + final TupleQuery tupleQuery = + cxn.prepareTupleQuery(QueryLanguage.SPARQL, query); + + final TupleQueryResult result = tupleQuery.evaluate(); + while (result.hasNext()) { + System.err.println(result.next()); + } + + + } finally { + cxn.close(); + repo.shutDown(); + } + + + } } Modified: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/bop/rdf/joinGraph/TestJoinGraphOnBSBMData.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/bop/rdf/joinGraph/TestJoinGraphOnBSBMData.java 2011-02-22 23:39:14 UTC (rev 4229) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/bop/rdf/joinGraph/TestJoinGraphOnBSBMData.java 2011-02-22 23:40:52 UTC (rev 4230) @@ -27,7 +27,7 @@ import com.bigdata.rdf.internal.constraints.MathBOp; import com.bigdata.rdf.internal.constraints.NotBOp; import com.bigdata.rdf.internal.constraints.SameTermBOp; -import com.bigdata.rdf.internal.constraints.ValueExpressionConstraint; +import com.bigdata.rdf.internal.constraints.Constraint; import com.bigdata.rdf.model.BigdataURI; import com.bigdata.rdf.model.BigdataValue; import com.bigdata.rdf.model.BigdataValueFactory; @@ -438,7 +438,7 @@ // the constraints on the join graph. 
constraints = new IConstraint[ves.length]; for (int i = 0; i < ves.length; i++) { - constraints[i] = ValueExpressionConstraint.wrap(ves[i]); + constraints[i] = Constraint.wrap(ves[i]); } } Modified: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/constraints/TestInlineConstraints.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/constraints/TestInlineConstraints.java 2011-02-22 23:39:14 UTC (rev 4229) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/constraints/TestInlineConstraints.java 2011-02-22 23:40:52 UTC (rev 4230) @@ -180,7 +180,7 @@ }, // constraints on the rule. new IConstraint[] { - ValueExpressionConstraint.wrap(new CompareBOp(a, new Constant<IV>(_35.getIV()), CompareOp.GT)) + Constraint.wrap(new CompareBOp(a, new Constant<IV>(_35.getIV()), CompareOp.GT)) } ); @@ -286,7 +286,7 @@ }, // constraints on the rule. new IConstraint[] { - ValueExpressionConstraint.wrap(new CompareBOp(a, new Constant<IV>(_35.getIV()), CompareOp.GE)) + Constraint.wrap(new CompareBOp(a, new Constant<IV>(_35.getIV()), CompareOp.GE)) }); try { @@ -393,7 +393,7 @@ }, // constraints on the rule. new IConstraint[] { - ValueExpressionConstraint.wrap(new CompareBOp(a, new Constant<IV>(_35.getIV()), CompareOp.LT)) + Constraint.wrap(new CompareBOp(a, new Constant<IV>(_35.getIV()), CompareOp.LT)) }); if (log.isInfoEnabled()) @@ -501,7 +501,7 @@ }, // constraints on the rule. new IConstraint[] { - ValueExpressionConstraint.wrap(new CompareBOp(a, new Constant<IV>(_35.getIV()), CompareOp.LE)) + Constraint.wrap(new CompareBOp(a, new Constant<IV>(_35.getIV()), CompareOp.LE)) }); if (log.isInfoEnabled()) @@ -618,7 +618,7 @@ }, // constraints on the rule. 
new IConstraint[] { - ValueExpressionConstraint.wrap(new CompareBOp(a, new MathBOp(dAge, new Constant<IV>(_5.getIV()), MathOp.PLUS), CompareOp.GT)) + Constraint.wrap(new CompareBOp(a, new MathBOp(dAge, new Constant<IV>(_5.getIV()), MathOp.PLUS), CompareOp.GT)) }); try { @@ -731,7 +731,7 @@ }, // constraints on the rule. new IConstraint[] { - ValueExpressionConstraint.wrap(new CompareBOp(a, new Constant<IV>(l2.getIV()), CompareOp.GT)) + Constraint.wrap(new CompareBOp(a, new Constant<IV>(l2.getIV()), CompareOp.GT)) }); try { Modified: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestRuleExpansion.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestRuleExpansion.java 2011-02-22 23:39:14 UTC (rev 4229) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/rules/TestRuleExpansion.java 2011-02-22 23:40:52 UTC (rev 4230) @@ -42,6 +42,7 @@ import com.bigdata.bop.IVariable; import com.bigdata.bop.IVariableOrConstant; import com.bigdata.bop.Var; +import com.bigdata.bop.constraint.Constraint; import com.bigdata.bop.constraint.NEConstant; import com.bigdata.bop.joinGraph.IEvaluationPlan; import com.bigdata.bop.joinGraph.IEvaluationPlanFactory; @@ -322,7 +323,7 @@ }, // true, // distinct new IConstraint[] { - new NEConstant(_p, sameAs) + Constraint.wrap(new NEConstant(_p, sameAs)) } ); Modified: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/spo/TestSPORelation.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/spo/TestSPORelation.java 2011-02-22 23:39:14 UTC (rev 4229) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/spo/TestSPORelation.java 2011-02-22 23:40:52 UTC (rev 4230) @@ -39,6 +39,7 @@ import com.bigdata.bop.IVariableOrConstant; import com.bigdata.bop.Var; import 
com.bigdata.bop.bindingSet.ArrayBindingSet; +import com.bigdata.bop.constraint.Constraint; import com.bigdata.bop.constraint.NE; import com.bigdata.bop.joinGraph.IEvaluationPlan; import com.bigdata.bop.joinGraph.IEvaluationPlanFactory; @@ -149,7 +150,7 @@ new P(relation, var("v"), rdfType, var("u")) // },// new IConstraint[] { - new NE(var("u"),var("x")) + Constraint.wrap(new NE(var("u"),var("x"))) } ); Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java 2011-02-22 23:39:14 UTC (rev 4229) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java 2011-02-22 23:40:52 UTC (rev 4230) @@ -72,13 +72,6 @@ import com.bigdata.bop.NV; import com.bigdata.bop.PipelineOp; import com.bigdata.bop.ap.Predicate; -import com.bigdata.bop.constraint.AND; -import com.bigdata.bop.constraint.BOUND; -import com.bigdata.bop.constraint.EQ; -import com.bigdata.bop.constraint.INBinarySearch; -import com.bigdata.bop.constraint.NE; -import com.bigdata.bop.constraint.NOT; -import com.bigdata.bop.constraint.OR; import com.bigdata.bop.engine.IRunningQuery; import com.bigdata.bop.engine.QueryEngine; import com.bigdata.bop.solutions.ISortOrder; @@ -96,7 +89,7 @@ import com.bigdata.rdf.internal.constraints.NotBOp; import com.bigdata.rdf.internal.constraints.OrBOp; import com.bigdata.rdf.internal.constraints.SameTermBOp; -import com.bigdata.rdf.internal.constraints.ValueExpressionConstraint; +import com.bigdata.rdf.internal.constraints.Constraint; import com.bigdata.rdf.lexicon.LexiconRelation; import com.bigdata.rdf.model.BigdataValue; import com.bigdata.rdf.sail.BigdataSail.Options; @@ -206,18 +199,7 @@ * either as JOINs (generating an additional {@link IPredicate} in the * {@link IRule}) or as an {@link 
INBinarySearch} constraint, where the inclusion set is * pre-populated by some operation on the {@link LexiconRelation}. - * <dl> - * <dt>EQ</dt> - * <dd>Translated into an {@link EQ} constraint on an {@link IPredicate}.</dd> - * <dt>NE</dt> - * <dd>Translated into an {@link NE} constraint on an {@link IPredicate}.</dd> - * <dt>IN</dt> - * <dd>Translated into an {@link INBinarySearch} constraint on an {@link IPredicate}.</dd> - * <dt>OR</dt> - * <dd>Translated into an {@link OR} constraint on an {@link IPredicate}.</dd> - * <dt></dt> - * <dd></dd> - * </dl> + * <p> * <h2>Magic predicates</h2> * <p> * {@link BD#SEARCH} is the only magic predicate at this time. When the object @@ -2051,7 +2033,7 @@ private IConstraint toConstraint(final ValueExpr ve) { final IValueExpression<IV> veBOp = toVE(ve); - return ValueExpressionConstraint.wrap(veBOp); + return Constraint.wrap(veBOp); } This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
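The change in this commit is mechanical but worth spelling out: raw `IConstraint` implementations (`NE`, `NEConstant`, `CompareBOp`, ...) are now routed through `Constraint.wrap(...)`, which adapts a boolean-valued `IValueExpression` into an `IConstraint` that a join can evaluate. The following is a minimal sketch of that adapter pattern; the interfaces are simplified stand-ins for the real `com.bigdata.bop` types, not the actual API.

```java
// Simplified stand-ins for com.bigdata.bop.IBindingSet / IValueExpression /
// IConstraint, for illustration only.
interface IBindingSet { Object get(String name); }
interface IValueExpression<T> { T get(IBindingSet bs); }
interface IConstraint { boolean accept(IBindingSet bs); }

// Sketch of the Constraint.wrap(...) idea: adapt a boolean-valued value
// expression into a constraint; a TRUE result means the solution passes.
final class Constraint implements IConstraint {
    private final IValueExpression<Boolean> ve;
    private Constraint(final IValueExpression<Boolean> ve) { this.ve = ve; }
    static IConstraint wrap(final IValueExpression<Boolean> ve) {
        return new Constraint(ve);
    }
    public boolean accept(final IBindingSet bs) {
        return Boolean.TRUE.equals(ve.get(bs));
    }
}

public class ConstraintWrapDemo {
    public static void main(String[] args) {
        // A toy NE(u, x) value expression, analogous to Constraint.wrap(new NE(u, x)).
        IValueExpression<Boolean> ne = bs -> !bs.get("u").equals(bs.get("x"));
        IConstraint c = Constraint.wrap(ne);
        IBindingSet bs = name -> name.equals("u") ? "a" : "b"; // u=a, x=b
        System.out.println(c.accept(bs)); // prints true
    }
}
```

The point of the indirection is that the query engine only ever sees `IConstraint`, while the value-expression layer (CompareBOp, MathBOp, etc.) stays composable.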
From: <mrp...@us...> - 2011-02-24 17:34:48
|
Revision: 4244 http://bigdata.svn.sourceforge.net/bigdata/?rev=4244&view=rev Author: mrpersonick Date: 2011-02-24 17:34:41 +0000 (Thu, 24 Feb 2011) Log Message: ----------- added a few more VE bops Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/IsLiteralBOp.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java Added Paths: ----------- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/IsBNodeBOp.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/IsURIBOp.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/StrBOp.java Added: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/IsBNodeBOp.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/IsBNodeBOp.java (rev 0) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/IsBNodeBOp.java 2011-02-24 17:34:41 UTC (rev 4244) @@ -0,0 +1,91 @@ +/* + +Copyright (C) SYSTAP, LLC 2006-2007. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. 
+ +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + +*/ +package com.bigdata.rdf.internal.constraints; + +import java.util.Map; + +import com.bigdata.bop.BOp; +import com.bigdata.bop.IBindingSet; +import com.bigdata.bop.IValueExpression; +import com.bigdata.bop.IVariable; +import com.bigdata.rdf.error.SparqlTypeErrorException; +import com.bigdata.rdf.internal.IV; +import com.bigdata.rdf.internal.XSDBooleanIV; + +/** + * Imposes the constraint <code>isBNode(x)</code>. + */ +public class IsBNodeBOp extends ValueExpressionBOp + implements IValueExpression<IV> { + + /** + * + */ + private static final long serialVersionUID = 3125106876006900339L; + + public IsBNodeBOp(final IVariable<IV> x) { + + this(new BOp[] { x }, null/*annocations*/); + + } + + /** + * Required shallow copy constructor. + */ + public IsBNodeBOp(final BOp[] args, final Map<String, Object> anns) { + + super(args, anns); + + if (args.length != 1 || args[0] == null) + throw new IllegalArgumentException(); + + } + + /** + * Required deep copy constructor. + */ + public IsBNodeBOp(final IsBNodeBOp op) { + super(op); + } + + public boolean accept(final IBindingSet bs) { + + final IV iv = get(0).get(bs); + + // not yet bound + if (iv == null) + throw new SparqlTypeErrorException(); + + return iv.isBNode(); + + } + + public IV get(final IBindingSet bs) { + + return accept(bs) ? 
XSDBooleanIV.TRUE : XSDBooleanIV.FALSE; + + } + +} Modified: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/IsLiteralBOp.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/IsLiteralBOp.java 2011-02-24 17:34:11 UTC (rev 4243) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/IsLiteralBOp.java 2011-02-24 17:34:41 UTC (rev 4244) @@ -70,7 +70,7 @@ super(op); } - public boolean accept(IBindingSet bs) { + public boolean accept(final IBindingSet bs) { final IV iv = get(0).get(bs); Added: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/IsURIBOp.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/IsURIBOp.java (rev 0) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/IsURIBOp.java 2011-02-24 17:34:41 UTC (rev 4244) @@ -0,0 +1,91 @@ +/* + +Copyright (C) SYSTAP, LLC 2006-2007. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. 
+ +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + +*/ +package com.bigdata.rdf.internal.constraints; + +import java.util.Map; + +import com.bigdata.bop.BOp; +import com.bigdata.bop.IBindingSet; +import com.bigdata.bop.IValueExpression; +import com.bigdata.bop.IVariable; +import com.bigdata.rdf.error.SparqlTypeErrorException; +import com.bigdata.rdf.internal.IV; +import com.bigdata.rdf.internal.XSDBooleanIV; + +/** + * Imposes the constraint <code>isURI(x)</code>. + */ +public class IsURIBOp extends ValueExpressionBOp + implements IValueExpression<IV> { + + /** + * + */ + private static final long serialVersionUID = 3125106876006900339L; + + public IsURIBOp(final IVariable<IV> x) { + + this(new BOp[] { x }, null/*annocations*/); + + } + + /** + * Required shallow copy constructor. + */ + public IsURIBOp(final BOp[] args, final Map<String, Object> anns) { + + super(args, anns); + + if (args.length != 1 || args[0] == null) + throw new IllegalArgumentException(); + + } + + /** + * Required deep copy constructor. + */ + public IsURIBOp(final IsURIBOp op) { + super(op); + } + + public boolean accept(final IBindingSet bs) { + + final IV iv = get(0).get(bs); + + // not yet bound + if (iv == null) + throw new SparqlTypeErrorException(); + + return iv.isURI(); + + } + + public IV get(final IBindingSet bs) { + + return accept(bs) ? 
XSDBooleanIV.TRUE : XSDBooleanIV.FALSE; + + } + +} Added: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/StrBOp.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/StrBOp.java (rev 0) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/StrBOp.java 2011-02-24 17:34:41 UTC (rev 4244) @@ -0,0 +1,118 @@ +/* + +Copyright (C) SYSTAP, LLC 2006-2007. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + +*/ +package com.bigdata.rdf.internal.constraints; + +import java.util.Map; + +import org.openrdf.model.URI; + +import com.bigdata.bop.BOp; +import com.bigdata.bop.IBindingSet; +import com.bigdata.bop.IValueExpression; +import com.bigdata.bop.IVariable; +import com.bigdata.rdf.error.SparqlTypeErrorException; +import com.bigdata.rdf.internal.IV; +import com.bigdata.rdf.internal.StrIV; +import com.bigdata.rdf.lexicon.LexiconRelation; +import com.bigdata.rdf.model.BigdataLiteral; +import com.bigdata.rdf.model.BigdataValueFactory; +import com.bigdata.rdf.store.AbstractTripleStore; + +/** + * Implements the <code>str(x)</code> function.
+ */ +public class StrBOp extends ValueExpressionBOp + implements IValueExpression<IV> { + + /** + * + */ + private static final long serialVersionUID = 3125106876006900339L; + + public StrBOp(final IVariable<IV> x) { + + this(new BOp[] { x }, null/*annocations*/); + + } + + /** + * Required shallow copy constructor. + */ + public StrBOp(final BOp[] args, final Map<String, Object> anns) { + + super(args, anns); + + if (args.length != 1 || args[0] == null) + throw new IllegalArgumentException(); + + } + + /** + * Required deep copy constructor. + */ + public StrBOp(final StrBOp op) { + super(op); + } + + public IV get(final IBindingSet bs) { + + final IV iv = get(0).get(bs); + + // not yet bound + if (iv == null) + throw new SparqlTypeErrorException(); + + // uh oh how the heck do I get my hands on this? big change + final AbstractTripleStore db = null; + + // use to materialize my terms + final LexiconRelation lex = db.getLexiconRelation(); + + // use to create my simple literals + final BigdataValueFactory vf = db.getValueFactory(); + + if (iv.isURI()) { + // return new simple literal using URI label + final URI uri = (URI) iv.asValue(lex); + final BigdataLiteral str = vf.createLiteral(uri.toString()); + return new StrIV(iv, str); + } else if (iv.isLiteral()) { + final BigdataLiteral lit = (BigdataLiteral) iv.asValue(lex); + if (lit.getDatatype() == null && lit.getLanguage() == null) { + // if simple literal return it + return iv; + } + else { + // else return new simple literal using Literal.getLabel + final BigdataLiteral str = vf.createLiteral(lit.getLabel()); + return new StrIV(iv, str); + } + } else { + throw new SparqlTypeErrorException(); + } + + } + +} Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java 2011-02-24 
17:34:11 UTC (rev 4243) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java 2011-02-24 17:34:41 UTC (rev 4244) @@ -30,7 +30,10 @@ import org.openrdf.query.algebra.Compare; import org.openrdf.query.algebra.Filter; import org.openrdf.query.algebra.Group; +import org.openrdf.query.algebra.IsBNode; import org.openrdf.query.algebra.IsLiteral; +import org.openrdf.query.algebra.IsResource; +import org.openrdf.query.algebra.IsURI; import org.openrdf.query.algebra.Join; import org.openrdf.query.algebra.LeftJoin; import org.openrdf.query.algebra.MathExpr; @@ -48,6 +51,7 @@ import org.openrdf.query.algebra.SameTerm; import org.openrdf.query.algebra.StatementPattern; import org.openrdf.query.algebra.StatementPattern.Scope; +import org.openrdf.query.algebra.Str; import org.openrdf.query.algebra.TupleExpr; import org.openrdf.query.algebra.UnaryTupleOperator; import org.openrdf.query.algebra.Union; @@ -72,6 +76,7 @@ import com.bigdata.bop.NV; import com.bigdata.bop.PipelineOp; import com.bigdata.bop.ap.Predicate; +import com.bigdata.bop.constraint.INBinarySearch; import com.bigdata.bop.engine.IRunningQuery; import com.bigdata.bop.engine.QueryEngine; import com.bigdata.bop.solutions.ISortOrder; @@ -82,14 +87,16 @@ import com.bigdata.rdf.internal.XSDBooleanIV; import com.bigdata.rdf.internal.constraints.AndBOp; import com.bigdata.rdf.internal.constraints.CompareBOp; +import com.bigdata.rdf.internal.constraints.Constraint; import com.bigdata.rdf.internal.constraints.EBVBOp; +import com.bigdata.rdf.internal.constraints.IsBNodeBOp; import com.bigdata.rdf.internal.constraints.IsBoundBOp; import com.bigdata.rdf.internal.constraints.IsLiteralBOp; +import com.bigdata.rdf.internal.constraints.IsURIBOp; import com.bigdata.rdf.internal.constraints.MathBOp; import com.bigdata.rdf.internal.constraints.NotBOp; import com.bigdata.rdf.internal.constraints.OrBOp; import com.bigdata.rdf.internal.constraints.SameTermBOp; -import 
com.bigdata.rdf.internal.constraints.Constraint; import com.bigdata.rdf.lexicon.LexiconRelation; import com.bigdata.rdf.model.BigdataValue; import com.bigdata.rdf.sail.BigdataSail.Options; @@ -2066,6 +2073,12 @@ return toVE((Bound) ve); } else if (ve instanceof IsLiteral) { return toVE((IsLiteral) ve); + } else if (ve instanceof IsBNode) { + return toVE((IsBNode) ve); + } else if (ve instanceof IsResource) { + return toVE((IsResource) ve); + } else if (ve instanceof IsURI) { + return toVE((IsURI) ve); } throw new UnsupportedOperatorException(ve); @@ -2152,6 +2165,22 @@ return new IsLiteralBOp(var); } + private IValueExpression<IV> toVE(final IsBNode isBNode) { + final IVariable<IV> var = (IVariable<IV>) toVE(isBNode.getArg()); + return new IsBNodeBOp(var); + } + + private IValueExpression<IV> toVE(final IsResource isResource) { + final IVariable<IV> var = (IVariable<IV>) toVE(isResource.getArg()); + // isResource == isURI || isBNode == !isLiteral + return new NotBOp(new IsLiteralBOp(var)); + } + + private IValueExpression<IV> toVE(final IsURI isURI) { + final IVariable<IV> var = (IVariable<IV>) toVE(isURI.getArg()); + return new IsURIBOp(var); + } + /** * Generate a bigdata term from a Sesame term. * <p> |
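The new operators in this commit share one evaluation pattern: an unbound argument raises a SPARQL type error, and `isResource` is not a separate operator at all but is rewritten as `NOT(isLiteral(x))` (see the `// isResource == isURI || isBNode == !isLiteral` comment above). A minimal sketch of that logic follows, using a simplified term-kind enum in place of the real `IV` interface; the exception type stands in for `SparqlTypeErrorException`.

```java
// Sketch of the IsBNodeBOp / IsURIBOp evaluation pattern and the IsResource
// rewrite. Kind is a simplified stand-in for the real IV term model.
public class TermKindDemo {
    enum Kind { URI, BNODE, LITERAL }

    // Mirrors accept(): an unbound (null) argument is a SPARQL type error,
    // otherwise the term kind is tested.
    static boolean isBNode(final Kind iv) {
        if (iv == null) throw new IllegalStateException("SparqlTypeError: not bound");
        return iv == Kind.BNODE;
    }
    static boolean isURI(final Kind iv) {
        if (iv == null) throw new IllegalStateException("SparqlTypeError: not bound");
        return iv == Kind.URI;
    }
    static boolean isLiteral(final Kind iv) {
        if (iv == null) throw new IllegalStateException("SparqlTypeError: not bound");
        return iv == Kind.LITERAL;
    }
    // The commit implements isResource by negating isLiteral rather than
    // adding a dedicated operator.
    static boolean isResource(final Kind iv) { return !isLiteral(iv); }

    public static void main(String[] args) {
        System.out.println(isResource(Kind.BNODE));   // prints true
        System.out.println(isResource(Kind.LITERAL)); // prints false
    }
}
```

The rewrite works because in RDF every term is exactly one of URI, blank node, or literal, so "not a literal" and "URI or blank node" pick out the same set.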
From: <tho...@us...> - 2011-03-07 19:59:02
|
Revision: 4277 http://bigdata.svn.sourceforge.net/bigdata/?rev=4277&view=rev Author: thompsonbry Date: 2011-03-07 19:58:52 +0000 (Mon, 07 Mar 2011) Log Message: ----------- Bug fix for https://sourceforge.net/apps/trac/bigdata/ticket/252 (inline dateTime causes error with full tx). I modified the LexiconRelation to always use an unisolated index for mutable views. The lexicon relation uses an eventually consistent strategy to write on the TERM2ID and ID2TERM indices and does not rely on transactional isolation. AbstractRelation#getIndex/3 is used to automatically wrap the unisolated views of those indices with an UnisolatedReadWriteIndex class, which uses a ReentrantReadWriteLock to control access to the unisolated indices. I have added com.bigdata.rdf.sail.TestTxCreate. It verifies that a Sail can be created which supports transactional isolation both with and without inline date times. I have removed the sample code for CreateSailUsingInlineDateTimes in favor of this unit test. Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/IDatatypeURIResolver.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/AbstractTripleStore.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java Added Paths: ----------- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestTxCreate.java Removed Paths: ------------- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/CreateSailUsingInlineDateTimes.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/CreateSailUsingInlineDateTimes.properties
Modified: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/IDatatypeURIResolver.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/IDatatypeURIResolver.java 2011-03-07 15:04:34 UTC (rev 4276) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/IDatatypeURIResolver.java 2011-03-07 19:58:52 UTC (rev 4277) @@ -28,6 +28,19 @@ import com.bigdata.rdf.lexicon.LexiconRelation; import com.bigdata.rdf.model.BigdataURI; +/** + * Specialized interface for resolving (and creating if necessary) datatype + * URIs. This interface requires access to a mutable view of the database since + * unknown URIs will be registered. + * + * TODO This is not going to be efficient in scale-out since it does not batch + * the resolution of the URIs. It will be more efficient to pass in the set of + * URIs of interest and have them all be registered at once. + * {@link LexiconRelation#addTerms(com.bigdata.rdf.model.BigdataValue[], int, boolean)} + * already provides for this kind of batched resolution. + * + * @author mrpersonick + */ public interface IDatatypeURIResolver { /** Modified: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java 2011-03-07 15:04:34 UTC (rev 4276) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java 2011-03-07 19:58:52 UTC (rev 4277) @@ -699,9 +699,10 @@ return textIndex; } - + /** - * Overridden to return the hard reference for the index. + * Overridden to use {@link #getTerm2IdIndex()} and + * {@link #getId2TermIndex()} as appropriate. */ @Override public IIndex getIndex(final IKeyOrder<? 
extends BigdataValue> keyOrder) { @@ -730,7 +731,24 @@ if (term2id == null) { - term2id = super.getIndex(LexiconKeyOrder.TERM2ID); + final long timestamp = getTimestamp(); + + if (TimestampUtility.isReadWriteTx(timestamp)) { + /* + * We always use the unisolated view of the lexicon + * indices for mutation and the lexicon indices do NOT + * set the [isolatable] flag even if the kb supports + * full tx isolation. This is because we use an + * eventually consistent strategy to write on the + * lexicon indices. + */ + term2id = AbstractRelation + .getIndex(getIndexManager(), + getFQN(LexiconKeyOrder.TERM2ID), + ITx.UNISOLATED); + } else { + term2id = super.getIndex(LexiconKeyOrder.TERM2ID); + } if (term2id == null) throw new IllegalStateException(); @@ -753,7 +771,24 @@ if (id2term == null) { - id2term = super.getIndex(LexiconKeyOrder.ID2TERM); + final long timestamp = getTimestamp(); + + if (TimestampUtility.isReadWriteTx(timestamp)) { + /* + * We always use the unisolated view of the lexicon + * indices for mutation and the lexicon indices do NOT + * set the [isolatable] flag even if the kb supports + * full tx isolation. This is because we use an + * eventually consistent strategy to write on the + * lexicon indices. 
+ */ + id2term = AbstractRelation + .getIndex(getIndexManager(), + getFQN(LexiconKeyOrder.ID2TERM), + ITx.UNISOLATED); + } else { + id2term = super.getIndex(LexiconKeyOrder.ID2TERM); + } if (id2term == null) throw new IllegalStateException(); @@ -1075,7 +1110,7 @@ if (buri.getIV() == null) { // will set tid on buri as a side effect - TermId tid = getTermId(buri); + final TermId<?> tid = getTermId(buri); if (tid == null) { Modified: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/AbstractTripleStore.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/AbstractTripleStore.java 2011-03-07 15:04:34 UTC (rev 4276) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/AbstractTripleStore.java 2011-03-07 19:58:52 UTC (rev 4277) @@ -922,7 +922,7 @@ String INLINE_DATE_TIMES = (AbstractTripleStore.class.getName() + ".inlineDateTimes").intern(); - String DEFAULT_INLINE_DATE_TIMES = "false"; + String DEFAULT_INLINE_DATE_TIMES = "true"; /** * The default timezone to be used to a) encode inline xsd:datetime Deleted: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/CreateSailUsingInlineDateTimes.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/CreateSailUsingInlineDateTimes.java 2011-03-07 15:04:34 UTC (rev 4276) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/CreateSailUsingInlineDateTimes.java 2011-03-07 19:58:52 UTC (rev 4277) @@ -1,75 +0,0 @@ -/** - -Copyright (C) SYSTAP, LLC 2006-2011. All rights reserved. - -Contact: - SYSTAP, LLC - 4501 Tower Road - Greensboro, NC 27410 - lic...@bi... - -This program is free software; you can redistribute it and/or modify -it under the terms of the GNU General Public License as published by -the Free Software Foundation; version 2 of the License. 
- -This program is distributed in the hope that it will be useful, -but WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -GNU General Public License for more details. - -You should have received a copy of the GNU General Public License -along with this program; if not, write to the Free Software -Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA -*/ -/* - * Created on Feb 17, 2011 - */ - -package com.bigdata.samples; - -import java.util.Properties; - -import com.bigdata.rdf.sail.BigdataSail; - -/** - * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ - */ -public class CreateSailUsingInlineDateTimes extends SampleCode { - - static public void main(String[] args) { - - try { - - CreateSailUsingInlineDateTimes f = new CreateSailUsingInlineDateTimes(); - - final String resource = "CreateSailUsingInlineDateTimes.properties"; - - final Properties properties = f.loadProperties(resource); - - System.out.println("Read properties from resource: " + resource); - properties.list(System.out); - - final BigdataSail sail = new BigdataSail(properties); - - sail.initialize(); - - try { - - System.out.println("Sail is initialized."); - - } finally { - - sail.shutDown(); - - } - - } catch (Throwable t) { - - t.printStackTrace(System.err); - - } - - } - -} Deleted: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/CreateSailUsingInlineDateTimes.properties =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/CreateSailUsingInlineDateTimes.properties 2011-03-07 15:04:34 UTC (rev 4276) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/CreateSailUsingInlineDateTimes.properties 2011-03-07 19:58:52 UTC (rev 4277) @@ -1,18 +0,0 @@ -com.bigdata.journal.AbstractJournal.createTempFile=true 
-com.bigdata.journal.AbstractJournal.deleteOnClose=true -com.bigdata.journal.AbstractJournal.deleteOnExit=true -# to not fail testcases that count... (NOT recommended). -com.bigdata.rdf.sail.exactSize=true -# This is full tx support, which does not have nearly the throughput of an unisolated writer combined with concurrent readers. -com.bigdata.rdf.sail.isolatableIndices=true -com.bigdata.rdf.sail.truthMaintenance=false -# This option will be going away once we finish the query engine refactor. -com.bigdata.rdf.sail.allowSesameQueryEvaluation=true -# Auto-commit is NOT recommended. -com.bigdata.rdf.sail.allowAutoCommit=true -com.bigdata.rdf.store.AbstractTripleStore.axiomsClass=com.bigdata.rdf.axioms.NoAxioms -com.bigdata.rdf.store.AbstractTripleStore.quads=true -com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers=false -com.bigdata.rdf.store.AbstractTripleStore.vocabularyClass=com.bigdata.rdf.vocab.NoVocabulary -com.bigdata.rdf.store.AbstractTripleStore.justify=false -com.bigdata.rdf.store.AbstractTripleStore.inlineDateTimes=true Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java 2011-03-07 15:04:34 UTC (rev 4276) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java 2011-03-07 19:58:52 UTC (rev 4277) @@ -105,6 +105,8 @@ suite.addTestSuite(TestInlineValues.class); + suite.addTestSuite(TestTxCreate.class); + // The Sesame TCK, including the SPARQL test suite. 
{ Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java 2011-03-07 15:04:34 UTC (rev 4276) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java 2011-03-07 19:58:52 UTC (rev 4277) @@ -84,7 +84,9 @@ suite.addTestSuite(TestDescribe.class); suite.addTestSuite(TestInlineValues.class); - + + suite.addTestSuite(TestTxCreate.class); + return suite; } Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java 2011-03-07 15:04:34 UTC (rev 4276) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java 2011-03-07 19:58:52 UTC (rev 4277) @@ -81,6 +81,8 @@ suite.addTestSuite(TestInlineValues.class); + suite.addTestSuite(TestTxCreate.class); + return suite; } Added: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestTxCreate.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestTxCreate.java (rev 0) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestTxCreate.java 2011-03-07 19:58:52 UTC (rev 4277) @@ -0,0 +1,147 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2011. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. 
+ +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +/* + * Created on Mar 7, 2011 + */ + +package com.bigdata.rdf.sail; + +import java.util.Properties; + +import org.openrdf.sail.SailException; + +import com.bigdata.rdf.axioms.NoAxioms; +import com.bigdata.rdf.internal.LexiconConfiguration; +import com.bigdata.rdf.store.AbstractTripleStore; +import com.bigdata.rdf.vocab.NoVocabulary; + +/** + * Unit test for the creation of a Sail with isolatable indices. This unit test + * was developed in response to <a + * href="https://sourceforge.net/apps/trac/bigdata/ticket/252">issue #252</a>, + * which reported a problem when creating a Sail which supports fully isolated + * indices and also uses inline date times. The problem goes back to how the + * {@link LexiconConfiguration} gains access to the ID2TERM index when it is + * initialized. + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + * @version $Id$ + */ +public class TestTxCreate extends ProxyBigdataSailTestCase { + + /** + * + */ + public TestTxCreate() { + } + + /** + * @param name + */ + public TestTxCreate(String name) { + super(name); + } + + /** + * Version of the test with date time inlining disabled. + * + * @throws SailException + */ + public void test_tx_create() throws SailException { + + final Properties properties = getProperties(); + + // truth maintenance is not compatible with full transactions.
+ properties.setProperty(BigdataSail.Options.TRUTH_MAINTENANCE, "false"); + + properties.setProperty(AbstractTripleStore.Options.AXIOMS_CLASS, + NoAxioms.class.getName()); + + properties.setProperty(AbstractTripleStore.Options.VOCABULARY_CLASS, + NoVocabulary.class.getName()); + + properties.setProperty(BigdataSail.Options.ISOLATABLE_INDICES, "true"); + + properties.setProperty(AbstractTripleStore.Options.JUSTIFY, "false"); + + properties.setProperty(AbstractTripleStore.Options.INLINE_DATE_TIMES, + "false"); + + final BigdataSail sail = new BigdataSail(properties); + + try { + + sail.initialize(); + + log.info("Sail is initialized."); + + } finally { + + sail.__tearDownUnitTest(); + + } + + } + + /** + * Version of the test with data time inlining enabled. + * @throws SailException + */ + public void test_tx_create_withInlineDateTimes() throws SailException { + + final Properties properties = getProperties(); + + // truth maintenance is not compatible with full transactions. + properties.setProperty(BigdataSail.Options.TRUTH_MAINTENANCE, "false"); + + properties.setProperty(AbstractTripleStore.Options.AXIOMS_CLASS, + NoAxioms.class.getName()); + + properties.setProperty(AbstractTripleStore.Options.VOCABULARY_CLASS, + NoVocabulary.class.getName()); + + properties.setProperty(AbstractTripleStore.Options.JUSTIFY, "false"); + + properties.setProperty(BigdataSail.Options.ISOLATABLE_INDICES, "true"); + + properties.setProperty(AbstractTripleStore.Options.INLINE_DATE_TIMES, + "true"); + + final BigdataSail sail = new BigdataSail(properties); + + try { + + sail.initialize(); + + log.info("Sail is initialized."); + + } finally { + + sail.__tearDownUnitTest(); + + } + + } + +} Property changes on: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestTxCreate.java ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL This was sent by the SourceForge.net collaborative development 
platform, the world's largest Open Source development site. |
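The property setup exercised by the new TestTxCreate test above can be sketched outside the test harness. This is a minimal sketch only: the string keys below are hypothetical stand-ins for the real constants on BigdataSail.Options and AbstractTripleStore.Options, and no Sail is actually initialized here — it just shows the combination of options (isolatable indices plus the inline date-times toggle) that the two test variants differ on.

```java
import java.util.Properties;

public class TxCreateConfigSketch {

    // Builds the option set used by both TestTxCreate variants. The keys are
    // hypothetical stand-ins for BigdataSail.Options / AbstractTripleStore.Options
    // constants; only the values mirror the test.
    public static Properties isolatableIndicesConfig(final boolean inlineDateTimes) {
        final Properties p = new Properties();
        // Truth maintenance is not compatible with full transactions.
        p.setProperty("truthMaintenance", "false");
        // No axioms, no vocabulary, no justifications for this test.
        p.setProperty("axiomsClass", "com.bigdata.rdf.axioms.NoAxioms");
        p.setProperty("vocabularyClass", "com.bigdata.rdf.vocab.NoVocabulary");
        p.setProperty("justify", "false");
        // The feature under test: fully isolated (transactional) indices.
        p.setProperty("isolatableIndices", "true");
        // The two test methods differ only in this flag.
        p.setProperty("inlineDateTimes", Boolean.toString(inlineDateTimes));
        return p;
    }
}
```

In the real test the resulting Properties object is handed to the BigdataSail constructor and sail.initialize() is what triggered issue #252 when both flags were enabled.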
From: <tho...@us...> - 2011-03-31 19:02:18
|
Revision: 4360 http://bigdata.svn.sourceforge.net/bigdata/?rev=4360&view=rev Author: thompsonbry Date: 2011-03-31 19:02:10 +0000 (Thu, 31 Mar 2011) Log Message: ----------- Added the jetty dependencies into build.xml in support of CI builds. Removed three unused dependencies (commons-codec, commons-httpclient, commons-logging). Removed the resources for the old REST remoting API. Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/build.xml Removed Paths: ------------- branches/QUADS_QUERY_BRANCH/bigdata-sails/lib/commons-codec.jar branches/QUADS_QUERY_BRANCH/bigdata-sails/lib/commons-httpclient.jar branches/QUADS_QUERY_BRANCH/bigdata-sails/lib/commons-logging.jar branches/QUADS_QUERY_BRANCH/bigdata-sails/src/resources/remoting/ Deleted: branches/QUADS_QUERY_BRANCH/bigdata-sails/lib/commons-codec.jar =================================================================== (Binary files differ) Deleted: branches/QUADS_QUERY_BRANCH/bigdata-sails/lib/commons-httpclient.jar =================================================================== (Binary files differ) Deleted: branches/QUADS_QUERY_BRANCH/bigdata-sails/lib/commons-logging.jar =================================================================== (Binary files differ) Modified: branches/QUADS_QUERY_BRANCH/build.xml =================================================================== --- branches/QUADS_QUERY_BRANCH/build.xml 2011-03-31 18:15:33 UTC (rev 4359) +++ branches/QUADS_QUERY_BRANCH/build.xml 2011-03-31 19:02:10 UTC (rev 4360) @@ -700,6 +700,7 @@ <property name="bigdata-jini.lib" location="${bigdata.dir}/bigdata-jini/lib/jini/lib" /> <property name="bigdata-rdf.lib" location="${bigdata.dir}/bigdata-rdf/lib" /> <property name="bigdata-sails.lib" location="${bigdata.dir}/bigdata-sails/lib" /> + <property name="bigdata-jetty.lib" location="${bigdata.dir}/bigdata-sails/lib/jetty" /> <property name="bigdata-zookeeper.lib" location="${bigdata.dir}/bigdata-jini/lib/apache" /> <!-- Utility libraries --> @@ -740,6
+741,20 @@ <copy file="${bigdata-rdf.lib}/slf4j-log4j12-1.4.3.jar" tofile="${dist.lib}/slf4j-log4j.jar" /> + <!-- jetty library --> + <copy file="${bigdata-jetty.lib}/jetty-continuation-7.2.2.v20101205.jar" + tofile="${dist.lib}/jetty-continuation.jar" /> + <copy file="${bigdata-jetty.lib}/jetty-http-7.2.2.v20101205.jar" + tofile="${dist.lib}/jetty-http.jar" /> + <copy file="${bigdata-jetty.lib}/jetty-io-7.2.2.v20101205.jar" + tofile="${dist.lib}/jetty-io.jar" /> + <copy file="${bigdata-jetty.lib}/jetty-server-7.2.2.v20101205.jar" + tofile="${dist.lib}/jetty-server.jar" /> + <copy file="${bigdata-jetty.lib}/jetty-util-7.2.2.v20101205.jar" + tofile="${dist.lib}/jetty-util.jar" /> + <copy file="${bigdata-jetty.lib}/servlet-api-2.5.jar" + tofile="${dist.lib}/servlet-api.jar" /> + <!-- NxParser (RDF NQuads support) --> <copy file="${bigdata-rdf.lib}/nxparser-6-22-2010.jar" tofile="${dist.lib}/nxparser.jar" /> @@ -1259,7 +1274,7 @@ <mkdir dir="${bigdata-test.lib}" /> <property name="bigdata-test.jar" location="${bigdata-test.lib}/bigdata-test.jar" /> - <property name="javac.test.classpath" value="${classes.dir}${path.separator}${junit.jar}${path.separator}${cweb-junit-ext.jar}${path.separator}${sesame-sparql-test.jar}${path.separator}${sesame-store-test.jar}${path.separator}${dist.lib}/classserver.jar${path.separator}${dist.lib}/cweb-commons.jar${path.separator}${dist.lib}/cweb-extser.jar${path.separator}${dist.lib}/highscalelib.jar${path.separator}${dist.lib}/dsiutils.jar${path.separator}${dist.lib}/lgplutils.jar${path.separator}${dist.lib}/fastutil.jar${path.separator}${dist.lib}/icu4j.jar${path.separator}${dist.lib}/iris.jar${path.separator}${dist.lib}/jgrapht.jar${path.separator}${dist.lib}/log4j.jar${path.separator}${dist.lib}/openrdf-sesame.jar${path.separator}${dist.lib}/slf4j.jar${path.separator}${dist.lib}/jsk-lib.jar${path.separator}${dist.lib}/jsk-platform.jar${path.separator}${dist.lib}/nxparser.jar${path.separator}${dist.lib}/zookeeper.jar" /> + <property 
name="javac.test.classpath" value="${classes.dir}${path.separator}${junit.jar}${path.separator}${cweb-junit-ext.jar}${path.separator}${sesame-sparql-test.jar}${path.separator}${sesame-store-test.jar}${path.separator}${dist.lib}/classserver.jar${path.separator}${dist.lib}/cweb-commons.jar${path.separator}${dist.lib}/cweb-extser.jar${path.separator}${dist.lib}/highscalelib.jar${path.separator}${dist.lib}/dsiutils.jar${path.separator}${dist.lib}/lgplutils.jar${path.separator}${dist.lib}/fastutil.jar${path.separator}${dist.lib}/icu4j.jar${path.separator}${dist.lib}/iris.jar${path.separator}${dist.lib}/jgrapht.jar${path.separator}${dist.lib}/log4j.jar${path.separator}${dist.lib}/openrdf-sesame.jar${path.separator}${dist.lib}/slf4j.jar${path.separator}${dist.lib}/jsk-lib.jar${path.separator}${dist.lib}/jsk-platform.jar${path.separator}${dist.lib}/nxparser.jar${path.separator}${dist.lib}/zookeeper.jar${path.separator}${dist.lib}/jetty-continuation.jar${path.separator}${dist.lib}/jetty-http.jar${path.separator}${dist.lib}/jetty-io.jar${path.separator}${dist.lib}/jetty-server.jar${path.separator}${dist.lib}/jetty-util.jar${path.separator}${dist.lib}/servlet-api.jar" /> <echo>javac </echo> |
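The long javac.test.classpath property above joins every jar with Ant's ${path.separator}, which expands to the platform-specific classpath separator. A minimal sketch of the same joining rule (the class name is hypothetical):

```java
import java.io.File;
import java.util.List;

public class ClasspathSketch {

    // Joins jar paths with the platform path separator (':' on Unix,
    // ';' on Windows), mirroring what Ant substitutes for each
    // ${path.separator} occurrence in the javac.test.classpath property.
    public static String join(final List<String> entries) {
        return String.join(File.pathSeparator, entries);
    }
}
```

This is why the same build.xml works on the OS X and Windows CI hosts mentioned in these reports: the separator is resolved per platform at build time rather than hard-coded.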
From: <tho...@us...> - 2011-04-11 13:35:58
|
Revision: 4388 http://bigdata.svn.sourceforge.net/bigdata/?rev=4388&view=rev Author: thompsonbry Date: 2011-04-11 13:35:47 +0000 (Mon, 11 Apr 2011) Log Message: ----------- Merged LARGE_LITERALS_REFACTOR to QUADS_QUERY_BRANCH [r4174:r4387]. SVN reported 23 file conflicts and 115 tree conflicts. None of the file conflicts was a real conflict. In all cases the version from the LARGE_LITERALS_REFACTOR was accepted. This change set includes: - HashCollisionUtility, which is used to test various designs for large literal handling. - New performance counters in several classes (Journal, BTree, etc). - The MemStore, which is based on the MemoryManager. - Various fixes to the MemoryManager, but there are still issues remaining. See the notes at the bottom of the MemoryManager class for details. - Support for "raw records" in the B+Tree. This is a configurable option which permits you to specify the maximum byte length of the value associated with a tuple in the B+Tree before it is written as a raw record on the backing file rather than inlined into the leaf. The following tests were run before committing the merge: - TestBigdataSailWithQuads - 11 expected test failures. - TestMemoryManager - ok - TestLocalTripleStore - ok - TestRWJournal o resolved test_restartSafe_writeOne_noCommit where the exception thrown had been changed and was also being wrapped as a RuntimeException, but the test had not been updated. This test failure was already present in the QUADS branch. o Removed an explicit cast to ResourceManager in AbstractTask which was causing several RWStore tests to fail. This bug was introduced within the LARGE_LITERALS_REFACTOR branch when I added getIndexCounters() to the IResourceManager API. o Bug fix to com.bigdata.journal.TestRootBlockView.test_ctor_correctRejection. This has been failing for some time.
The problem is that the commit record addr and commit record index addr MUST both be NULL (0L) if the commit counter is zero. This was a unit test problem. I've added several new cases to the unit test and fixed the one that was failing. Also changed all references to System.err into conditional logging @ INFO. - TestWORMJournal - Ok. - TestMemStore - Ok. - TestAll (BTree) - Ok (except for 4 known failures for the index segment which have been around since the merge from the JOURNAL_HA_BRANCH). Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTreeTupleCursor.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractNode.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractTuple.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTreeCounters.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTreeUtilizationReport.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/DefaultTupleSerializer.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IBTreeUtilizationReport.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexMetadata.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegment.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentBuilder.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentCheckpoint.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentStore.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/Leaf.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/LeafTupleIterator.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/MutableLeafData.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/Node.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/ResultSet.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/data/AbstractReadOnlyNodeData.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/data/DefaultLeafCoder.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/data/IAbstractNodeData.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/data/ILeafData.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/filter/TupleFilter.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/filter/TupleRemover.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/keys/KVO.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/htree/MutableBucketData.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/htree/data/IBucketData.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractTask.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/IResourceManager.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/Journal.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/Name2Addr.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/resources/IndexManager.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/sector/AllocationContext.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/sector/IMemoryManager.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/sector/MemoryManager.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/sector/SectorAllocator.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/util/concurrent/LatchedExecutor.java branches/QUADS_QUERY_BRANCH/bigdata/src/resources/logging/log4j.properties 
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/AbstractBTreeTestCase.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/TestAll_BTreeBasics.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/TestInsertLookupRemoveKeysInRootLeaf.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/TestRemoveAll.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/TestSplitJoinRootLeaf.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/TestUtilMethods.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/data/AbstractLeafDataRecordTestCase.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/data/AbstractNodeOrLeafDataRecordTestCase.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/data/MockLeafData.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/htree/data/AbstractHashBucketDataRecordTestCase.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/htree/data/MockBucketData.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/htree/data/TestBucketDataRecord_Simple_Simple.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/io/TestCase3.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/AbstractBufferStrategyTestCase.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/AbstractRestartSafeTestCase.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/journal/TestRootBlockView.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/rwstore/sector/TestAll.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/rwstore/sector/TestMemoryManager.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/AbstractBNodeIV.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/AbstractIV.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/DateTimeExtension.java 
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/ExtensionIV.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/ILexiconConfiguration.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/IV.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/LexiconConfiguration.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/NullIV.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/StrIV.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/TermId.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/UUIDLiteralIV.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/XSD.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/XSDBooleanIV.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/XSDByteIV.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/XSDDecimalIV.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/XSDDoubleIV.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/XSDFloatIV.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/XSDIntIV.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/XSDIntegerIV.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/XSDLongIV.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/XSDShortIV.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/StrBOp.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataBNodeImpl.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataLiteralImpl.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataURI.java 
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataURIImpl.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValue.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValueFactory.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValueFactoryImpl.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValueImpl.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestAll.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestInlining.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/model/TestAll.java Added Paths: ----------- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rawstore/TransientResourceMetadata.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/sector/MemStore.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rwstore/sector/MemoryManagerOutOfMemory.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/TestRawRecords.java branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/rwstore/sector/TestMemStore.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/NotMaterializedException.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/HashCollisionUtility.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/ParserSpeedTest.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestIVCache.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/model/TestBigdataValueSerialization.java Removed Paths: ------------- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/BlobOverflowHandler.java Property Changed: ---------------- branches/QUADS_QUERY_BRANCH/ branches/QUADS_QUERY_BRANCH/bigdata/lib/jetty/ 
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/aggregate/ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/joinGraph/ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/joinGraph/fast/ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/joinGraph/rto/ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/util/ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/htree/raba/ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/jsr166/ branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/bop/joinGraph/ branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/bop/joinGraph/fast/ branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/bop/joinGraph/rto/ branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/bop/util/ branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/jsr166/ branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/util/httpd/ branches/QUADS_QUERY_BRANCH/bigdata-compatibility/ branches/QUADS_QUERY_BRANCH/bigdata-compatibility/src/ branches/QUADS_QUERY_BRANCH/bigdata-compatibility/src/test/ branches/QUADS_QUERY_BRANCH/bigdata-compatibility/src/test/com/ branches/QUADS_QUERY_BRANCH/bigdata-compatibility/src/test/com/bigdata/ branches/QUADS_QUERY_BRANCH/bigdata-compatibility/src/test/com/bigdata/journal/ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/attr/ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/disco/ branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/util/config/ branches/QUADS_QUERY_BRANCH/bigdata-perf/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/lib/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/benchmark/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/benchmark/generator/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/benchmark/model/ 
branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/benchmark/qualification/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/benchmark/serializer/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/benchmark/testdriver/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/benchmark/tools/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/java/benchmark/vocabulary/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/resources/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/resources/bsbm-data/queries/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/resources/logging/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/test/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/test/benchmark/ branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/src/test/benchmark/bigdata/ branches/QUADS_QUERY_BRANCH/bigdata-perf/btc/ branches/QUADS_QUERY_BRANCH/bigdata-perf/btc/src/resources/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/LEGAL/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/lib/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/edu/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/uba/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/api/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/java/edu/lehigh/swat/bench/ubt/bigdata/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/answers (U1)/ 
branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/config/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/logging/ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/scripts/ branches/QUADS_QUERY_BRANCH/bigdata-perf/uniprot/ branches/QUADS_QUERY_BRANCH/bigdata-perf/uniprot/src/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/bop/rdf/aggregate/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/error/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/constraints/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/relation/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/relation/rule/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/util/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/samples/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/samples/com/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/samples/com/bigdata/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/samples/com/bigdata/rdf/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/samples/com/bigdata/rdf/internal/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/bop/rdf/aggregate/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/constraints/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/relation/ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/relation/rule/ branches/QUADS_QUERY_BRANCH/bigdata-sails/lib/jetty/ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/changesets/ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/ 
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/bench/ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/ branches/QUADS_QUERY_BRANCH/dsi-utils/ branches/QUADS_QUERY_BRANCH/dsi-utils/LEGAL/ branches/QUADS_QUERY_BRANCH/dsi-utils/lib/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/java/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/java/it/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/java/it/unimi/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/java/it/unimi/dsi/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/java/it/unimi/dsi/compression/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/java/it/unimi/dsi/io/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/java/it/unimi/dsi/util/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/test/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/test/it/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/test/it/unimi/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/test/it/unimi/dsi/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/test/it/unimi/dsi/io/ branches/QUADS_QUERY_BRANCH/dsi-utils/src/test/it/unimi/dsi/util/ branches/QUADS_QUERY_BRANCH/lgpl-utils/src/java/it/unimi/dsi/fastutil/bytes/custom/ branches/QUADS_QUERY_BRANCH/lgpl-utils/src/test/it/unimi/dsi/fastutil/bytes/custom/ branches/QUADS_QUERY_BRANCH/osgi/ branches/QUADS_QUERY_BRANCH/src/resources/bin/config/ Property changes on: branches/QUADS_QUERY_BRANCH ___________________________________________________________________ Modified: svn:mergeinfo - /branches/BTREE_BUFFER_BRANCH:2004-2045 /branches/DEV_BRANCH_27_OCT_2009:2270-2546,2548-2782 /branches/JOURNAL_HA_BRANCH:2596-4066 /branches/LEXICON_REFACTOR_BRANCH:2633-3304 /branches/bugfix-btm:2594-3237 /branches/dev-btm:2574-2730 /branches/fko:3150-3194 /trunk:3392-3437,3656-4061 + /branches/BTREE_BUFFER_BRANCH:2004-2045 /branches/DEV_BRANCH_27_OCT_2009:2270-2546,2548-2782 /branches/JOURNAL_HA_BRANCH:2596-4066 /branches/LARGE_LITERALS_REFACTOR:4175-4387 
/branches/LEXICON_REFACTOR_BRANCH:2633-3304 /branches/bugfix-btm:2594-3237 /branches/dev-btm:2574-2730 /branches/fko:3150-3194 /trunk:3392-3437,3656-4061 Property changes on: branches/QUADS_QUERY_BRANCH/bigdata/lib/jetty ___________________________________________________________________ Added: svn:mergeinfo + Property changes on: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/aggregate ___________________________________________________________________ Added: svn:mergeinfo + Property changes on: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/joinGraph ___________________________________________________________________ Added: svn:mergeinfo + Property changes on: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/joinGraph/fast ___________________________________________________________________ Added: svn:mergeinfo + Property changes on: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/joinGraph/rto ___________________________________________________________________ Added: svn:mergeinfo + Property changes on: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/util ___________________________________________________________________ Added: svn:mergeinfo + Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2011-04-10 13:17:59 UTC (rev 4387) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2011-04-11 13:35:47 UTC (rev 4388) @@ -28,14 +28,12 @@ package com.bigdata.btree; import java.io.PrintStream; -import java.io.Serializable; import java.lang.ref.Reference; import java.lang.ref.SoftReference; import java.lang.ref.WeakReference; import java.nio.ByteBuffer; import java.util.HashMap; import java.util.Iterator; -import java.util.UUID; import java.util.concurrent.Callable; import 
java.util.concurrent.ExecutionException; import java.util.concurrent.Executor; @@ -71,9 +69,9 @@ import com.bigdata.cache.IGlobalLRU.ILRUCache; import com.bigdata.counters.CounterSet; import com.bigdata.counters.ICounterSet; -import com.bigdata.counters.Instrument; import com.bigdata.counters.OneShotInstrument; import com.bigdata.io.AbstractFixedByteArrayBuffer; +import com.bigdata.io.ByteArrayBuffer; import com.bigdata.io.DirectBufferPool; import com.bigdata.io.compression.IRecordCompressorFactory; import com.bigdata.journal.AbstractTask; @@ -85,6 +83,7 @@ import com.bigdata.mdi.IResourceMetadata; import com.bigdata.mdi.LocalPartitionMetadata; import com.bigdata.rawstore.IRawStore; +import com.bigdata.rawstore.TransientResourceMetadata; import com.bigdata.resources.IndexManager; import com.bigdata.resources.OverflowManager; import com.bigdata.service.DataService; @@ -703,153 +702,168 @@ // */ // final protected RingBuffer<PO> readRetentionQueue; - /** - * Return some "statistics" about the btree including both the static - * {@link CounterSet} and the {@link BTreeCounters}s. - * <p> - * Note: Since this DOES NOT include the {@link #getDynamicCounterSet()}, - * holding a reference to the returned {@link ICounterSet} WILL NOT cause - * the {@link AbstractBTree} to remain strongly reachable. - * - * @see #getStaticCounterSet() - * @see #getDynamicCounterSet() - * @see BTreeCounters#getCounters() - */ + /** + * Interface declaring namespaces for performance counters collected for a + * B+Tree. + * + * @see AbstractBTree#getCounters() + */ + public static interface IBTreeCounters { + + /** Counters for the {@link IBTreeStatistics} interface. */ + String Statistics = "Statistics"; + /** Counters for the {@link AbstractBTree#writeRetentionQueue}. */ + String WriteRetentionQueue = "WriteRetentionQueue"; + /** Counters for key search performance. */ + String KeySearch = "KeySearch"; + /** Counters for {@link IRangeQuery}. 
*/ + String RangeQuery = "RangeQuery"; + /** Counters for the {@link ILinearList} API. */ + String LinearList = "LinearList"; + /** Counters for structural modifications. */ + String Structure = "Structure"; + /** Counters for tuple-level operations. */ + String Tuples = "Tuples"; + /** Counters for IO. */ + String IO = "IO"; + + } + + /** + * Return some "statistics" about the btree including both the static + * {@link CounterSet} and the {@link BTreeCounters}s. + * <p> + * Note: counters reporting directly on the {@link AbstractBTree} use a + * snapshot mechanism which prevents a hard reference to the + * {@link AbstractBTree} from being attached to the return + * {@link CounterSet} object. One consequence is that these counters will + * not update until the next time you invoke {@link #getCounters()}. + * <p> + * Note: In order to snapshot the counters use {@link OneShotInstrument} to + * prevent the inclusion of an inner class with a reference to the outer + * {@link AbstractBTree} instance. + * + * @see BTreeCounters#getCounters() + * + * @todo use same instance of BTreeCounters for all BTree instances in + * standalone! + * + * @todo estimate heap requirements for nodes and leaves based on their + * state (keys, values, and other arrays). report estimated heap + * consumption here. + */ public ICounterSet getCounters() { - final CounterSet counterSet = getStaticCounterSet(); + final CounterSet counterSet = new CounterSet(); + { + + counterSet.addCounter("index UUID", new OneShotInstrument<String>( + getIndexMetadata().getIndexUUID().toString())); - counterSet.attach(btreeCounters.getCounters()); + counterSet.addCounter("class", new OneShotInstrument<String>( + getClass().getName())); - return counterSet; + } - } + /* + * Note: These statistics are reported using a snapshot mechanism which + * prevents a hard reference to the AbstractBTree from being attached to + * the CounterSet object! 
+ */ + { -// synchronized public ICounterSet getCounters() { -// -// if (counterSet == null) { -// -// counterSet = getStaticCounterSet(); -// -// counterSet.attach(btreeCounters.getCounters()); -// -// } -// -// return counterSet; -// -// } -// private CounterSet counterSet; - - /** - * Return a new {@link CounterSet} containing dynamic counters - <strong>DO - * NOT hold onto the returned {@link CounterSet} as it contains implicit - * hard references to the {@link AbstractBTree} and will prevent the - * {@link AbstractBTree} from being finalized! </strong> - * - * @todo factor counter names into interface. - */ - public CounterSet getDynamicCounterSet() { + final CounterSet tmp = counterSet + .makePath(IBTreeCounters.WriteRetentionQueue); - final CounterSet counterSet = new CounterSet(); + tmp.addCounter("Capacity", new OneShotInstrument<Integer>( + writeRetentionQueue.capacity())); - counterSet.addCounter("Write Retention Queue Distinct Count", - new Instrument<Long>() { - public void sample() { - setValue((long) ndistinctOnWriteRetentionQueue); - } - }); + tmp.addCounter("Size", new OneShotInstrument<Integer>( + writeRetentionQueue.size())); - counterSet.addCounter("Write Retention Queue Size", - new Instrument<Integer>() { - protected void sample() { - setValue(writeRetentionQueue.size()); - } - }); + tmp.addCounter("Distinct", new OneShotInstrument<Integer>( + ndistinctOnWriteRetentionQueue)); -// counterSet.addCounter("Read Retention Queue Size", -// new Instrument<Integer>() { -// protected void sample() { -// setValue(readRetentionQueue == null ? 0 -// : readRetentionQueue.size()); -// } -// }); - - /* - * @todo report time open? - * - * @todo report #of times open? - * - * @todo estimate heap requirements for nodes and leaves based on their - * state (keys, values, and other arrays). report estimated heap - * consumption here. - */ - - // the % utilization in [0:1] for the whole tree (nodes + leaves). 
- counterSet.addCounter("% utilization", new Instrument<Double>(){ - protected void sample() { - setValue(getUtilization().getTotalUtilization() / 100d); - } - }); + } - counterSet.addCounter("height", new Instrument<Integer>() { - protected void sample() { - setValue(getHeight()); - } - }); + /* + * Note: These statistics are reported using a snapshot mechanism which + * prevents a hard reference to the AbstractBTree from being attached to + * the CounterSet object! + */ + { - counterSet.addCounter("#nnodes", new Instrument<Integer>() { - protected void sample() { - setValue(getNodeCount()); - } - }); + final CounterSet tmp = counterSet + .makePath(IBTreeCounters.Statistics); - counterSet.addCounter("#nleaves", new Instrument<Integer>() { - protected void sample() { - setValue(getLeafCount()); - } - }); + tmp.addCounter("branchingFactor", new OneShotInstrument<Integer>( + branchingFactor)); - counterSet.addCounter("#entries", new Instrument<Integer>() { - protected void sample() { - setValue(getEntryCount()); - } - }); + tmp.addCounter("height", + new OneShotInstrument<Integer>(getHeight())); - return counterSet; - - } - - /** - * Returns a new {@link CounterSet} containing <strong>static</strong> - * counters. These counters are all {@link OneShotInstrument}s and DO NOT - * contain implicit references to the {@link AbstractBTree}. They may be - * safely held and will not cause the {@link AbstractBTree} to remain - * strongly reachable. 
- */ - public CounterSet getStaticCounterSet() { + tmp.addCounter("nodeCount", new OneShotInstrument<Integer>( + getNodeCount())); - final CounterSet counterSet = new CounterSet(); + tmp.addCounter("leafCount", new OneShotInstrument<Integer>( + getLeafCount())); - counterSet.addCounter("index UUID", new OneShotInstrument<String>( - getIndexMetadata().getIndexUUID().toString())); + tmp.addCounter("tupleCount", new OneShotInstrument<Integer>( + getEntryCount())); - counterSet.addCounter("branchingFactor", - new OneShotInstrument<Integer>(branchingFactor)); + /* + * Note: The utilization numbers reported here are a bit misleading. + * They only consider the #of index positions in the node or leaf + * which is full, but do not take into account the manner in which + * the persistence store allocates space to the node or leaf. For + * example, for the WORM we do perfect allocations but retain many + * versions. For the RWStore, we do best-fit allocations but recycle + * old versions. The space efficiency of the persistence store is + * typically the main driver, not the utilization rate as reported + * here. + */ + final IBTreeUtilizationReport r = getUtilization(); + + // % utilization in [0:100] for nodes + tmp.addCounter("%nodeUtilization", new OneShotInstrument<Integer>(r + .getNodeUtilization())); + + // % utilization in [0:100] for leaves + tmp.addCounter("%leafUtilization", new OneShotInstrument<Integer>(r + .getLeafUtilization())); - counterSet.addCounter("class", new OneShotInstrument<String>(getClass() - .getName())); + // % utilization in [0:100] for the whole tree (nodes + leaves). + tmp.addCounter("%totalUtilization", new OneShotInstrument<Integer>(r + .getTotalUtilization())); // / 100d - counterSet.addCounter("Write Retention Queue Capacity", - new OneShotInstrument<Integer>(writeRetentionQueue.capacity())); + /* + * Compute the average bytes per tuple. 
This requires access to the + * current entry count, so we have to do this as a OneShot counter + * to avoid dragging in the B+Tree reference. + */ -// if (readRetentionQueue != null) { -// -// counterSet.addCounter("Read Retention Queue Capacity", -// new OneShotInstrument<Integer>(readRetentionQueue -// .capacity())); -// -// } + final int entryCount = getEntryCount(); + final long bytes = btreeCounters.bytesOnStore_nodesAndLeaves.get() + + btreeCounters.bytesOnStore_rawRecords.get(); + + final int bytesPerTuple = (int) (entryCount == 0 ? 0d + : (bytes / entryCount)); + + tmp.addCounter("bytesPerTuple", new OneShotInstrument<Integer>( + bytesPerTuple)); + + } + + /* + * Attach detailed performance counters. + * + * Note: The BTreeCounters object does not have a reference to the + * AbstractBTree. Its counters will update "live" since we do not + * need to snapshot them. + */ + counterSet.attach(btreeCounters.getCounters()); + return counterSet; } @@ -1391,52 +1405,6 @@ } /** - * Static class since must be {@link Serializable}. - * - * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ - */ - static final class TransientResourceMetadata implements IResourceMetadata { - - private final UUID uuid; - - public TransientResourceMetadata(final UUID uuid) { - this.uuid = uuid; - } - - private static final long serialVersionUID = 1L; - - public boolean isJournal() { - return false; - } - - public boolean isIndexSegment() { - return false; - } - - public boolean equals(IResourceMetadata o) { - return false; - } - - public long getCreateTime() { - return 0L; - } - - public long getCommitTime() { - return 0L; - } - - public String getFile() { - return ""; - } - - public UUID getUUID() { - return uuid; - } - - } - - /** * Returns the metadata record for this btree. * <p> * Note: If the B+Tree is read-only then the metadata object will be cloned @@ -2383,7 +2351,7 @@ // conditional range check on the key. 
assert rangeCheck(key, false); - btreeCounters.nindexOf.incrementAndGet(); + btreeCounters.nindexOf.increment(); return getRootOrFinger(key).indexOf(key); @@ -2397,7 +2365,7 @@ if (index >= getEntryCount()) throw new IndexOutOfBoundsException(ERROR_TOO_LARGE); - btreeCounters.ngetKey.incrementAndGet(); + btreeCounters.ngetKey.increment(); return getRoot().keyAt(index); @@ -2424,7 +2392,7 @@ if (tuple == null || !tuple.getValuesRequested()) throw new IllegalArgumentException(); - btreeCounters.ngetKey.incrementAndGet(); + btreeCounters.ngetKey.increment(); getRoot().valueAt(index, tuple); @@ -2535,8 +2503,8 @@ } -// // only count the expensive ones. -// btreeCounters.nrangeCount.incrementAndGet(); + // only count the expensive ones. + btreeCounters.nrangeCount.increment(); final AbstractNode root = getRoot(); @@ -2742,7 +2710,7 @@ final IFilter filter// ) { -// btreeCounters.nrangeIterator.incrementAndGet(); + btreeCounters.nrangeIterator.increment(); /* * Does the iterator declare that it will not write back on the index? @@ -2871,7 +2839,7 @@ // Note: this iterator supports traversal with concurrent // modification. src = new MutableBTreeTupleCursor(((BTree) this), - new Tuple(this, flags), fromKey, toKey); + tuple, fromKey, toKey); } @@ -3699,10 +3667,14 @@ oldAddr = 0; } + final int nbytes = store.getByteCount(addr); + btreeCounters.writeNanos += System.nanoTime() - begin; - btreeCounters.bytesWritten += store.getByteCount(addr); + btreeCounters.bytesWritten += nbytes; + btreeCounters.bytesOnStore_nodesAndLeaves.addAndGet(nbytes); + } /* @@ -3718,7 +3690,7 @@ // remove from cache. 
storeCache.remove(oldAddr); } - store.delete(oldAddr); + deleteNodeOrLeaf(oldAddr);//, node instanceof Node); } node.setDirty(false); @@ -3838,11 +3810,11 @@ // + tmp.limit() + ", byteCount(addr)=" // + store.getByteCount(addr)+", addr="+store.toString(addr); - btreeCounters.readNanos.addAndGet( System.nanoTime() - begin ); + btreeCounters.readNanos.add( System.nanoTime() - begin ); final int bytesRead = tmp.limit(); - btreeCounters.bytesRead.addAndGet(bytesRead); + btreeCounters.bytesRead.add(bytesRead); } // Note: This is not necessary. The most likely place to be interrupted is in the IO on the raw store. It is not worth testing for an interrupt here since we are more liklely to notice one in the raw store and this method is low latency except for the potential IO read. @@ -3865,15 +3837,15 @@ // decode the record. data = nodeSer.decode(tmp); - btreeCounters.deserializeNanos.addAndGet(System.nanoTime() - begin); + btreeCounters.deserializeNanos.add(System.nanoTime() - begin); if (data.isLeaf()) { - btreeCounters.leavesRead.incrementAndGet(); + btreeCounters.leavesRead.increment(); } else { - btreeCounters.nodesRead.incrementAndGet(); + btreeCounters.nodesRead.increment(); } @@ -4038,5 +4010,166 @@ // } // // private long timestamp = System.nanoTime(); - + + /** + * Encode a raw record address into a byte[] suitable for storing in the + * value associated with a tuple and decoding using + * {@link #decodeRecordAddr(byte[])} + * + * @param recordAddrBuf + * The buffer that will be used to format the record address. + * @param addr + * The raw record address. + * + * @return A newly allocated byte[] which encodes that address. + */ + static public byte[] encodeRecordAddr(final ByteArrayBuffer recordAddrBuf, + final long addr) { + + recordAddrBuf.reset().putLong(addr); + + return recordAddrBuf.toByteArray(); + + } + + /** + * Decodes a signed long value as encoded by {@link #append(long)}. + * + * @param buf + * The buffer containing the encoded record address. 
+ * + * @return The signed long value. + */ + static public long decodeRecordAddr(final byte[] buf) { + + long v = 0L; + + // big-endian. + v += (0xffL & buf[0]) << 56; + v += (0xffL & buf[1]) << 48; + v += (0xffL & buf[2]) << 40; + v += (0xffL & buf[3]) << 32; + v += (0xffL & buf[4]) << 24; + v += (0xffL & buf[5]) << 16; + v += (0xffL & buf[6]) << 8; + v += (0xffL & buf[7]) << 0; + + return v; + + } + + /** + * The maximum length of a <code>byte[]</code> value stored within a leaf + * for this {@link BTree}. This value only applies when raw record support + * has been enabled for the {@link BTree}. Values greater than this in + * length will be written as raw records on the backing persistence store. + * + * @return The maximum size of an inline <code>byte[]</code> value before it + * is promoted to a raw record. + */ + int getMaxRecLen() { + + return metadata.getMaxRecLen(); + + } + + /** + * Read the raw record from the backing store. + * <p> + * Note: This does not cache the record. In general, the file system cache + * should do a good job here. + * + * @param addr + * The record address. + * + * @return The data. + * + * @todo performance counters for raw records read. + * + * FIXME Add raw record compression. + */ + ByteBuffer readRawRecord(final long addr) { + + // read from the backing store. + final ByteBuffer b = getStore().read(addr); + + final int nbytes = getStore().getByteCount(addr); + + btreeCounters.rawRecordsRead.increment(); + btreeCounters.rawRecordsBytesRead.add(nbytes); + + return b; + + } + + /** + * Write a raw record on the backing store. + * + * @param b + * The data. + * + * @return The address at which the data was written. + * + * FIXME Add raw record compression. + */ + long writeRawRecord(final byte[] b) { + + if(isReadOnly()) + throw new IllegalStateException(ERROR_READ_ONLY); + + // write the value on the backing store. 
+ final long addr = getStore().write(ByteBuffer.wrap(b)); + + final int nbytes = b.length; + + btreeCounters.rawRecordsWritten++; + btreeCounters.rawRecordsBytesWritten += nbytes; + btreeCounters.bytesOnStore_rawRecords.addAndGet(nbytes); + + return addr; + + } + + /** + * Delete a raw record from the backing store. + * + * @param addr + * The address of the record. + */ + void deleteRawRecord(final long addr) { + + if(isReadOnly()) + throw new IllegalStateException(ERROR_READ_ONLY); + + getStore().delete(addr); + + final int nbytes = getStore().getByteCount(addr); + + btreeCounters.bytesOnStore_rawRecords.addAndGet(-nbytes); + + } + + /** + * Delete a node or leaf from the backing store, updating various + * performance counters. + * + * @param addr + * The address of the node or leaf. + */ + void deleteNodeOrLeaf(final long addr) { + + if(addr == IRawStore.NULL) + throw new IllegalArgumentException(); + + if (isReadOnly()) + throw new IllegalStateException(ERROR_READ_ONLY); + + getStore().delete(addr); + + final int nbytes = getStore().getByteCount(addr); + + btreeCounters.bytesOnStore_nodesAndLeaves.addAndGet(-nbytes); + + } + } Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTreeTupleCursor.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTreeTupleCursor.java 2011-04-10 13:17:59 UTC (rev 4387) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTreeTupleCursor.java 2011-04-11 13:35:47 UTC (rev 4388) @@ -1101,18 +1101,36 @@ if (DEBUG) log.debug("key="+BytesUtil.toString(key)); + + /* + * Remove the last visited tuple. + * + * Note: this will cause the tuples in the current leaf to have a gap + * (at least logically) and the remaining tuples will be moved down to + * cover that gap. This means that the [index] for the cursor position + * needs to be corrected. 
That is handled when the cursor position's + * listener notices the mutation event. + * + * Note: This uses the [tuple] on the iterator. This is an optimization + * for REMOVEALL. The [tuple] on the iterator will not have the KEYS or + * VALUES flags set when REMOVEALL is specified for the iterator. That + * means that we will not materialize those keys or values. This reduces + * heap churn and also prevents disk hits when the value is a raw + * record. + */ - /* - * Remove the last visited tuple. - * - * Note: this will cause the tuples in the current leaf to have a gap - * (at least logically) and the remaining tuples will be moved down to - * cover that gap. This means that the [index] for the cursor position - * needs to be corrected. That is handled when the cursor position's - * listener notices the mutation event. - */ + if (btree.getIndexMetadata().getDeleteMarkers()) { + + // set the delete marker. + btree.insert(key, null/* val */, true/* delete */, + btree.getRevisionTimestamp(), tuple); + + } else { - btree.remove(key); + // remove the tuple. + btree.remove(key, tuple); + + } } @@ -1487,7 +1505,8 @@ } - tuple.copy(index, leafCursor.leaf()); // Note: increments [tuple.nvisited] !!! + // Note: increments [tuple.nvisited] !!! + tuple.copy(index, leafCursor.leaf()); if(DEBUG) { Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractNode.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractNode.java 2011-04-10 13:17:59 UTC (rev 4387) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractNode.java 2011-04-11 13:35:47 UTC (rev 4388) @@ -503,7 +503,7 @@ assert parent == null; // Update the root node on the btree. 
- if(INFO) + if(log.isInfoEnabled()) log.info("Copy-on-write : replaced root node on btree."); final boolean wasDirty = btree.root.dirty; Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractTuple.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractTuple.java 2011-04-10 13:17:59 UTC (rev 4387) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractTuple.java 2011-04-11 13:35:47 UTC (rev 4388) @@ -28,13 +28,14 @@ package com.bigdata.btree; +import java.nio.ByteBuffer; import java.util.Arrays; -import com.bigdata.btree.data.ILeafData; import com.bigdata.io.ByteArrayBuffer; import com.bigdata.io.DataInputBuffer; import com.bigdata.io.DataOutputBuffer; import com.bigdata.rawstore.IBlock; +import com.bigdata.rawstore.IRawStore; /** * Abstract base class with much of the functionality of {@link ITuple}. @@ -325,21 +326,22 @@ throw new UnsupportedOperationException(); } - - /** - * Copy data and metadata for the index entry from the {@link Leaf} into the - * {@link Tuple} and increment the counter of the #of visited entries. - * - * @param index - * The index entry. - * @param leaf - * The leaf. - * - * @todo The various copy methods should also set the [sourceIndex] property - * and {@link ITuple#getSourceIndex()} should be implemented by this - * class (or maybe add a setSourceIndex() to be more flexible). - */ - public void copy(final int index, final ILeafData leaf) { + + /** + * Copy data and metadata for the index entry from the {@link Leaf} into the + * {@link Tuple} and increment the counter of the #of visited entries. + * + * @param index + * The index entry. + * @param leaf + * The leaf. If a raw record must be materialized, it will be + * read from the backing store for the {@link Leaf}. 
+ * + * @todo The various copy methods should also set the [sourceIndex] property + * and {@link ITuple#getSourceIndex()} should be implemented by this + * class (or maybe add a setSourceIndex() to be more flexible). + */ + public void copy(final int index, final Leaf leaf) { nvisited++; @@ -369,11 +371,36 @@ if (!versionDeleted) { isNull = leaf.getValues().isNull(index); - - if(!isNull) { - leaf.getValues().copy(index, vbuf); - + if (!isNull) { + + if (leaf.hasRawRecords()) { + + final long addr = leaf.getRawRecord(index); + + if (addr == IRawStore.NULL) { + + // copy out of the leaf. + leaf.getValues().copy(index, vbuf); + + } else { + + // materialize from the backing store. + final ByteBuffer tmp = leaf.btree + .readRawRecord(addr); + + // and copy into the value buffer. + vbuf.copy(tmp); + + } + + } else { + + // copy out of the leaf. + leaf.getValues().copy(index, vbuf); + + } + } } Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java 2011-04-10 13:17:59 UTC (rev 4387) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java 2011-04-11 13:35:47 UTC (rev 4388) @@ -36,6 +36,7 @@ import com.bigdata.btree.Leaf.ILeafListener; import com.bigdata.btree.data.ILeafData; import com.bigdata.btree.data.INodeData; +import com.bigdata.io.ByteArrayBuffer; import com.bigdata.journal.AbstractJournal; import com.bigdata.journal.ICommitter; import com.bigdata.journal.IIndexManager; @@ -45,6 +46,7 @@ import com.bigdata.mdi.IResourceMetadata; import com.bigdata.mdi.JournalMetadata; import com.bigdata.mdi.LocalPartitionMetadata; +import com.bigdata.rawstore.Bytes; import com.bigdata.rawstore.IRawStore; /** @@ -269,7 +271,13 @@ * counter into its serialized record (without the partition identifier). 
*/ protected AtomicLong counter; - + + /** + * A buffer used to encode a raw record address for a mutable {@link BTree} + * and otherwise <code>null</code>. + */ + private final ByteArrayBuffer recordAddrBuf; + // /** // * The last address from which the {@link IndexMetadata} record was read or // * on which it was written. @@ -359,9 +367,34 @@ * before we read the root node. */ // reopen(); + + /* + * Buffer used to encode addresses into the tuple value for a mutable + * B+Tree. + */ + recordAddrBuf = readOnly ? null + : new ByteArrayBuffer(Bytes.SIZEOF_LONG); } - + + /** + * Encode a raw record address into a byte[] suitable for storing in the + * value associated with a tuple and decoding using + * {@link AbstractBTree#decodeRecordAddr(byte[])}. This method is only + * supported for a mutable {@link BTree} instance. Per the contract of the + * mutable {@link BTree}, it is not thread-safe. + * + * @param addr + * The raw record address. + * + * @return A newly allocated byte[] which encodes that address. + */ + byte[] encodeRecordAddr(final long addr) { + + return AbstractBTree.encodeRecordAddr(recordAddrBuf, addr); + + } + /** * Sets the {@link #checkpoint} and initializes the mutable fields from the * checkpoint record. In order for this operation to be atomic, the caller @@ -1215,7 +1248,7 @@ if(childAddr != 0L) { // delete persistent child. - getStore().delete(childAddr); + deleteNodeOrLeaf(childAddr); } @@ -1223,28 +1256,39 @@ } - // delete root iff persistent. - if (getRoot().getIdentity() != 0L) { - - getStore().delete(getRoot().getIdentity()); - + final long raddr = getRoot().getIdentity(); + + if (raddr != IRawStore.NULL) { + + // delete root iff persistent. + deleteNodeOrLeaf(raddr); + } + // @todo update bytesOnStore to ZERO. 
replaceRootWithEmptyLeaf(); } else if (getIndexMetadata().getDeleteMarkers() - || getStore() instanceof RWStrategy) { + || getStore() instanceof RWStrategy// + || metadata.getRawRecords()// + ) { - /* - * Write deletion markers for each non-deleted entry. When the - * transaction commits, those delete markers will have to validate - * against the global state of the tree. If the transaction - * validates, then the merge down onto the global state will cause - * the corresponding entries to be removed from the global tree. - * - * Note: This operation can change the tree structure by triggering - * copy-on-write for immutable node or leaves. - */ + /* + * Visit each tuple. + * + * If deletion markers are enabled, then this will write deletion + * markers for each non-deleted entry. When the transaction commits, + * those delete markers will have to validate against the global + * state of the tree. If the transaction validates, then the merge + * down onto the global state will cause the corresponding entries + * to be removed from the global tree. + * + * If raw record support is enabled, then the raw records for the + * tuples visited will be deleted on the backing store. + * + * Note: This operation can change the tree structure by triggering + * copy-on-write for immutable node or leaves. 
+ */ final ITupleIterator itr = rangeIterator(null, null, 0/* capacity */, REMOVEALL/* flags */, null/* filter */); Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTreeCounters.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTreeCounters.java 2011-04-10 13:17:59 UTC (rev 4387) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTreeCounters.java 2011-04-11 13:35:47 UTC (rev 4388) @@ -26,6 +26,8 @@ import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicLong; +import com.bigdata.btree.AbstractBTree.IBTreeCounters; +import com.bigdata.counters.CAT; import com.bigdata.counters.CounterSet; import com.bigdata.counters.ICounterSet; import com.bigdata.counters.Instrument; @@ -128,12 +130,12 @@ ninserts.addAndGet(o.ninserts.get()); nremoves.addAndGet(o.nremoves.get()); // ILinearList - nindexOf.addAndGet(o.nindexOf.get()); // Note: also does key search. - ngetKey.addAndGet(o.ngetKey.get()); - ngetValue.addAndGet(o.ngetValue.get()); + nindexOf.add(o.nindexOf.get()); // Note: also does key search. + ngetKey.add(o.ngetKey.get()); + ngetValue.add(o.ngetValue.get()); // // IRangeQuery -// nrangeCount.addAndGet(o.nrangeCount.get()); -// nrangeIterator.addAndGet(o.nrangeIterator.get()); + nrangeCount.add(o.nrangeCount.get()); + nrangeIterator.add(o.nrangeIterator.get()); // Structural mutation. 
rootsSplit += o.rootsSplit; rootsJoined += o.rootsJoined; @@ -152,17 +154,21 @@ ntupleUpdateDelete += o.ntupleUpdateDelete; ntupleRemove += o.ntupleRemove; // IO reads - nodesRead.addAndGet(o.nodesRead.get()); - leavesRead.addAndGet(o.leavesRead.get()); - bytesRead.addAndGet(o.bytesRead.get()); - readNanos.addAndGet(o.readNanos.get()); - deserializeNanos.addAndGet(o.deserializeNanos.get()); + nodesRead.add(o.nodesRead.get()); + leavesRead.add(o.leavesRead.get()); + bytesRead.add(o.bytesRead.get()); + readNanos.add(o.readNanos.get()); + deserializeNanos.add(o.deserializeNanos.get()); + rawRecordsRead.add(o.rawRecordsRead.get()); + rawRecordsBytesRead.add(o.rawRecordsBytesRead.get()); // IO writes. nodesWritten += o.nodesWritten; leavesWritten += o.leavesWritten; bytesWritten += o.bytesWritten; writeNanos += o.writeNanos; serializeNanos += o.serializeNanos; + rawRecordsWritten += o.rawRecordsWritten; + rawRecordsBytesWritten+= o.rawRecordsBytesWritten; } @@ -189,12 +195,12 @@ t.ninserts.addAndGet(-o.ninserts.get()); t.nremoves.addAndGet(-o.nremoves.get()); // ILinearList - t.nindexOf.addAndGet(-o.nindexOf.get()); // Note: also does key search. - t.ngetKey.addAndGet(-o.ngetKey.get()); - t.ngetValue.addAndGet(-o.ngetValue.get()); + t.nindexOf.add(-o.nindexOf.get()); // Note: also does key search. + t.ngetKey.add(-o.ngetKey.get()); + t.ngetValue.add(-o.ngetValue.get()); // // IRangeQuery -// t.nrangeCount.addAndGet(-o.nrangeCount.get()); -// t.nrangeIterator.addAndGet(-o.nrangeIterator.get()); + t.nrangeCount.add(-o.nrangeCount.get()); + t.nrangeIterator.add(-o.nrangeIterator.get()); // Structural mutation. 
t.rootsSplit -= o.rootsSplit; t.rootsJoined -= o.rootsJoined; @@ -213,17 +219,21 @@ t.ntupleUpdateDelete -= o.ntupleUpdateDelete; t.ntupleRemove -= o.ntupleRemove; // IO reads - t.nodesRead.addAndGet(-o.nodesRead.get()); - t.leavesRead.addAndGet(-o.leavesRead.get()); - t.bytesRead.addAndGet(-o.bytesRead.get()); - t.readNanos.addAndGet(-o.readNanos.get()); - t.deserializeNanos.addAndGet(-o.deserializeNanos.get()); + t.nodesRead.add(-o.nodesRead.get()); + t.leavesRead.add(-o.leavesRead.get()); + t.bytesRead.add(-o.bytesRead.get()); + t.readNanos.add(-o.readNanos.get()); + t.deserializeNanos.add(-o.deserializeNanos.get()); + t.rawRecordsRead.add(-o.rawRecordsRead.get()); + t.rawRecordsBytesRead.add(-o.rawRecordsBytesRead.get()); // IO writes. t.nodesWritten -= o.nodesWritten; t.leavesWritten -= o.leavesWritten; t.bytesWritten -= o.bytesWritten; t.serializeNanos -= o.serializeNanos; t.writeNanos -= o.writeNanos; + t.rawRecordsWritten -= o.rawRecordsWritten; + t.rawRecordsBytesWritten -= o.rawRecordsBytesWritten; return t; @@ -242,9 +252,9 @@ public final AtomicLong ninserts = new AtomicLong(); public final AtomicLong nremoves = new AtomicLong(); // ILinearList - public final AtomicLong nindexOf = new AtomicLong(); - public final AtomicLong ngetKey = new AtomicLong(); - public final AtomicLong ngetValue = new AtomicLong(); + public final CAT nindexOf = new CAT(); + public final CAT ngetKey = new CAT(); + public final CAT ngetValue = new CAT(); /* * Note: These counters are hot spots with concurrent readers and do not @@ -252,8 +262,8 @@ * 1/26/2010. */ // IRangeQuery -// public final AtomicLong nrangeCount = new AtomicLong(); -// public final AtomicLong nrangeIterator = new AtomicLong(); + public final CAT nrangeCount = new CAT(); + public final CAT nrangeIterator = new CAT(); // Structural change (single-threaded, so plain variables are Ok). public int rootsSplit = 0; public int rootsJoined = 0; @@ -300,17 +310,20 @@ * transaction's write set and are counted here. 
*/ public long ntupleInsertDelete = 0; + /** * #of pre-existing tuples whose value was updated to a non-deleted value * (includes update of a deleted tuple to a non-delet... [truncated message content] |
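The commit above adds `encodeRecordAddr`/`decodeRecordAddr` helpers which pack a raw record address as eight big-endian bytes in a `byte[]` tuple value. A minimal standalone sketch of the same round trip (the class name here is illustrative, not part of the bigdata API; `ByteBuffer` is big-endian by default, which matches the manual shifts in `decodeRecordAddr`):

```java
import java.nio.ByteBuffer;

public class RecordAddrCodec {

    /** Encode a long address as 8 big-endian bytes (what encodeRecordAddr produces). */
    static byte[] encode(final long addr) {
        return ByteBuffer.allocate(8).putLong(addr).array();
    }

    /** Decode 8 big-endian bytes back into the signed long, mirroring decodeRecordAddr. */
    static long decode(final byte[] buf) {
        long v = 0L;
        for (int i = 0; i < 8; i++) {
            // shift the accumulator and fold in the next unsigned byte.
            v = (v << 8) | (0xffL & buf[i]);
        }
        return v;
    }

    public static void main(final String[] args) {
        // A negative value exercises the sign bit in the top byte.
        final long addr = -987654321012345L;
        System.out.println(decode(encode(addr)) == addr);
    }
}
```

Because the top byte carries the sign, the round trip preserves negative addresses as well as positive ones.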
From: <tho...@us...> - 2011-04-11 17:57:15
Revision: 4394 http://bigdata.svn.sourceforge.net/bigdata/?rev=4394&view=rev Author: thompsonbry Date: 2011-04-11 17:57:09 +0000 (Mon, 11 Apr 2011) Log Message: ----------- Removed the jetty dependencies from the bigdata-sails/lib path. They are also checked in under bigdata/lib. I've decided to leave jetty in the bigdata/lib path since we will be converting the performance counter http support to the servlet API as well, and that is part of the bigdata model, so we will need the jetty dependencies for that module. I've also updated build.xml and .classpath to reflect the use of the versions of these jars in bigdata/lib rather than bigdata-sails/lib. Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/.classpath branches/QUADS_QUERY_BRANCH/build.xml Removed Paths: ------------- branches/QUADS_QUERY_BRANCH/bigdata-sails/lib/jetty/ Modified: branches/QUADS_QUERY_BRANCH/.classpath =================================================================== --- branches/QUADS_QUERY_BRANCH/.classpath 2011-04-11 16:31:22 UTC (rev 4393) +++ branches/QUADS_QUERY_BRANCH/.classpath 2011-04-11 17:57:09 UTC (rev 4394) @@ -19,10 +19,16 @@ <classpathentry kind="src" path="ctc-striterators/src/test"/> <classpathentry kind="src" path="bigdata-perf/bsbm/src/test"/> <classpathentry kind="lib" path="bigdata-jini/lib/apache/zookeeper-3.2.1.jar"/> - <classpathentry kind="lib" path="bigdata/lib/dsi-utils-1.0.6-020610.jar"/> - <classpathentry kind="lib" path="bigdata/lib/lgpl-utils-1.0.6-020610.jar"/> - <classpathentry kind="lib" path="bigdata-rdf/lib/nxparser-6-22-2010.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/dsi-utils-1.0.6-020610.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/lgpl-utils-1.0.6-020610.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/nxparser-6-22-2010.jar"/> <classpathentry kind="lib" path="bigdata/lib/tuprolog/tuprolog-v2.1.1.jar"/> + <classpathentry exported="true" kind="lib" 
path="bigdata/lib/jetty/jetty-continuation-7.2.2.v20101205.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-http-7.2.2.v20101205.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-io-7.2.2.v20101205.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-server-7.2.2.v20101205.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-util-7.2.2.v20101205.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/servlet-api-2.5.jar"/> <classpathentry kind="src" path="lgpl-utils/src/java"/> <classpathentry kind="src" path="lgpl-utils/src/test"/> <classpathentry exported="true" kind="lib" path="bigdata/lib/icu/icu4j-3_6.jar"/> @@ -60,12 +66,6 @@ <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/openrdf-sesame-2.3.0-onejar.jar"/> <classpathentry kind="lib" path="bigdata-sails/lib/sesame-sparql-testsuite-2.3.0.jar"/> <classpathentry kind="lib" path="bigdata-sails/lib/sesame-store-testsuite-2.3.0.jar"/> - <classpathentry kind="lib" path="bigdata/lib/high-scale-lib-v1.1.2.jar"/> - <classpathentry kind="lib" path="bigdata-sails/lib/jetty/jetty-continuation-7.2.2.v20101205.jar"/> - <classpathentry kind="lib" path="bigdata-sails/lib/jetty/jetty-http-7.2.2.v20101205.jar"/> - <classpathentry kind="lib" path="bigdata-sails/lib/jetty/jetty-io-7.2.2.v20101205.jar"/> - <classpathentry kind="lib" path="bigdata-sails/lib/jetty/jetty-server-7.2.2.v20101205.jar"/> - <classpathentry kind="lib" path="bigdata-sails/lib/jetty/jetty-util-7.2.2.v20101205.jar"/> - <classpathentry kind="lib" path="bigdata-sails/lib/jetty/servlet-api-2.5.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/high-scale-lib-v1.1.2.jar"/> <classpathentry kind="output" path="bin"/> </classpath> Modified: branches/QUADS_QUERY_BRANCH/build.xml =================================================================== --- branches/QUADS_QUERY_BRANCH/build.xml 
2011-04-11 16:31:22 UTC (rev 4393) +++ branches/QUADS_QUERY_BRANCH/build.xml 2011-04-11 17:57:09 UTC (rev 4394) @@ -3,7 +3,7 @@ <!-- @todo change the release target to put release notes into the root of the archives. --> <!-- @todo maven2 setup so we can run and publish unit tests results. --> <!-- $Id$ --> -<project name="bigdata" default="jar" basedir="."> +<project name="bigdata" default="bundleJar" basedir="."> <property file="build.properties" /> @@ -700,7 +700,7 @@ <property name="bigdata-jini.lib" location="${bigdata.dir}/bigdata-jini/lib/jini/lib" /> <property name="bigdata-rdf.lib" location="${bigdata.dir}/bigdata-rdf/lib" /> <property name="bigdata-sails.lib" location="${bigdata.dir}/bigdata-sails/lib" /> - <property name="bigdata-jetty.lib" location="${bigdata.dir}/bigdata-sails/lib/jetty" /> + <property name="bigdata-jetty.lib" location="${bigdata.dir}/bigdata/lib/jetty" /> <property name="bigdata-zookeeper.lib" location="${bigdata.dir}/bigdata-jini/lib/apache" /> <!-- Utility libraries -->
From: <tho...@us...> - 2011-04-13 20:33:19
Revision: 4397 http://bigdata.svn.sourceforge.net/bigdata/?rev=4397&view=rev Author: thompsonbry Date: 2011-04-13 20:33:10 +0000 (Wed, 13 Apr 2011) Log Message: ----------- This commit includes incremental progress toward support for full read/write transactions for the RWStore and a further refactoring of the NanoSparqlServer. The read/write tx support is not yet finished; Martyn is looking into a bug in the free bits logic.

I've refactored the NanoSparqlServer initialization a bit. You can now initialize the server either programmatically using NanoSparqlServer or declaratively using a web.xml file. A sample web.xml file is included in the same package as the NanoSparqlServer, and you can run that web.xml file using the WebAppUnassembled class in the same package. The key to making this work was isolating the life cycle logic within a ServletContextListener. When using the web.xml file, everything is configured explicitly in that file. When using the NanoSparqlServer, the same initialization parameters are set explicitly and the various listeners and servlets are initialized by hand (in the code).

I had to add several more of the jetty dependencies in order to make this work:

- jetty-servlet
- jetty-security
- jetty-webapp (only when using web.xml)
- jetty-xml (only when using web.xml)

build.xml has been updated so CI will find those dependencies. So has .classpath.

I've also refactored the performance counters support so that it can be served up by jetty; it is now available at /counters when using the NanoSparqlServer. There is more work to be done in layering in the performance counters properly, but this is a nice start.

The "DELETE with BODY using POST" method of the REST API was not being invoked properly by the test suite and was not being delegated properly by the RESTServlet. Both of those issues have been fixed.
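To illustrate the declarative route described above, here is a minimal sketch of what such a web.xml might contain. The listener and servlet class names are taken from this commit's file list (BigdataRDFServletContextListener, RESTServlet); the context-param names, property file name, and URL pattern are illustrative assumptions, not copied from the committed sample web.xml.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
  <!-- Startup configuration read by the context listener. The parameter
       name and value here are hypothetical placeholders. -->
  <context-param>
    <param-name>propertyFile</param-name>
    <param-value>RWStore.properties</param-value>
  </context-param>
  <!-- The listener isolates the life cycle logic (opening and closing the
       backing Journal, etc.), so the same code path serves both the
       web.xml route and programmatic startup via NanoSparqlServer. -->
  <listener>
    <listener-class>com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener</listener-class>
  </listener>
  <!-- The RESTServlet delegates to the query/update/delete servlets. -->
  <servlet>
    <servlet-name>REST</servlet-name>
    <servlet-class>com.bigdata.rdf.sail.webapp.RESTServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>REST</servlet-name>
    <url-pattern>/sparql</url-pattern>
  </servlet-mapping>
</web-app>
```

When using NanoSparqlServer instead of this descriptor, the equivalent wiring (the listener registration and servlet mappings) is performed in code against the jetty API, which is why the jetty-servlet and jetty-security jars are needed in both modes while jetty-webapp and jetty-xml are only needed to parse web.xml.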
Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/.classpath branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/httpd/CounterSetHTTPD.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/query/CounterSetQuery.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/query/URLQueryModel.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/render/RendererFactory.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractTask.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/RWStrategy.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rawstore/AbstractRawStore.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/util/httpd/NanoHTTPD.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/NanoSparqlServer.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataServlet.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/NanoSparqlServer.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/RESTServlet.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/StatusServlet.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/XMLBuilder.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/bench/TestNanoSparqlServer.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/bench/TestNanoSparqlServer_StartStop.java 
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestAll.java branches/QUADS_QUERY_BRANCH/build.xml Added Paths: ----------- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataBaseContext.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/ConfigParams.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/CountersServlet.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/SparqlEndpointConfig.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/WebAppUnassembled.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/web.xml branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestXMLBuilder.java Removed Paths: ------------- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataContext.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/JettySparqlServer.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestJettySparqlServer_StartStop.java Modified: branches/QUADS_QUERY_BRANCH/.classpath =================================================================== --- branches/QUADS_QUERY_BRANCH/.classpath 2011-04-13 20:21:27 UTC (rev 4396) +++ branches/QUADS_QUERY_BRANCH/.classpath 2011-04-13 20:33:10 UTC (rev 4397) @@ -1,71 +1,75 @@ -<?xml version="1.0" encoding="UTF-8"?> -<classpath> - <classpathentry kind="src" 
path="bigdata-rdf/src/java"/> - <classpathentry kind="src" path="bigdata-rdf/src/samples"/> - <classpathentry kind="src" path="dsi-utils/src/java"/> - <classpathentry kind="src" path="bigdata/src/resources/logging"/> - <classpathentry kind="src" path="bigdata-sails/src/samples"/> - <classpathentry kind="src" path="bigdata-jini/src/test"/> - <classpathentry kind="src" path="bigdata-sails/src/java"/> - <classpathentry kind="src" path="bigdata/src/java"/> - <classpathentry kind="src" path="bigdata-rdf/src/test"/> - <classpathentry kind="src" path="bigdata/src/test"/> - <classpathentry kind="src" path="bigdata-sails/src/test"/> - <classpathentry kind="src" path="bigdata-jini/src/java"/> - <classpathentry kind="src" path="contrib/src/problems"/> - <classpathentry kind="src" path="bigdata/src/samples"/> - <classpathentry kind="src" path="dsi-utils/src/test"/> - <classpathentry kind="src" path="ctc-striterators/src/java"/> - <classpathentry kind="src" path="ctc-striterators/src/test"/> - <classpathentry kind="src" path="bigdata-perf/bsbm/src/test"/> - <classpathentry kind="lib" path="bigdata-jini/lib/apache/zookeeper-3.2.1.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/dsi-utils-1.0.6-020610.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/lgpl-utils-1.0.6-020610.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/nxparser-6-22-2010.jar"/> - <classpathentry kind="lib" path="bigdata/lib/tuprolog/tuprolog-v2.1.1.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-continuation-7.2.2.v20101205.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-http-7.2.2.v20101205.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-io-7.2.2.v20101205.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-server-7.2.2.v20101205.jar"/> - <classpathentry exported="true" kind="lib" 
path="bigdata/lib/jetty/jetty-util-7.2.2.v20101205.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/servlet-api-2.5.jar"/> - <classpathentry kind="src" path="lgpl-utils/src/java"/> - <classpathentry kind="src" path="lgpl-utils/src/test"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/icu/icu4j-3_6.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/unimi/colt-1.2.0.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/cweb-commons-1.1-b2-dev.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/cweb-extser-0.1-b2-dev.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/cweb-junit-ext-1.1-b3-dev.jar" sourcepath="/junit-ext/src"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/junit-3.8.1.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/apache/log4j-1.2.15.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/icu/icu4jni.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/slf4j-api-1.4.3.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/slf4j-log4j12-1.4.3.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/iris-0.58.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/jgrapht-jdk1.5-0.7.1.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/browser.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/classserver.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/fiddler.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/jini-core.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/jini-ext.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/jsk-lib.jar"/> - <classpathentry exported="true" kind="lib" 
path="bigdata-jini/lib/jini/lib/jsk-platform.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/jsk-resources.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/mahalo.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/mercury.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/norm.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/outrigger.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/reggie.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/start.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/sun-util.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/tools.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/unimi/fastutil-5.1.5.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/lucene/lucene-analyzers-3.0.0.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/lucene/lucene-core-3.0.0.jar"/> - <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/> - <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/openrdf-sesame-2.3.0-onejar.jar"/> - <classpathentry kind="lib" path="bigdata-sails/lib/sesame-sparql-testsuite-2.3.0.jar"/> - <classpathentry kind="lib" path="bigdata-sails/lib/sesame-store-testsuite-2.3.0.jar"/> - <classpathentry exported="true" kind="lib" path="bigdata/lib/high-scale-lib-v1.1.2.jar"/> - <classpathentry kind="output" path="bin"/> -</classpath> +<?xml version="1.0" encoding="UTF-8"?> +<classpath> + <classpathentry kind="src" path="bigdata-rdf/src/java"/> + <classpathentry kind="src" path="bigdata-rdf/src/samples"/> + <classpathentry kind="src" path="dsi-utils/src/java"/> + <classpathentry kind="src" path="bigdata/src/resources/logging"/> + <classpathentry 
kind="src" path="bigdata-sails/src/samples"/> + <classpathentry kind="src" path="bigdata-jini/src/test"/> + <classpathentry kind="src" path="bigdata-sails/src/java"/> + <classpathentry kind="src" path="bigdata/src/java"/> + <classpathentry kind="src" path="bigdata-rdf/src/test"/> + <classpathentry kind="src" path="bigdata/src/test"/> + <classpathentry kind="src" path="bigdata-sails/src/test"/> + <classpathentry kind="src" path="bigdata-jini/src/java"/> + <classpathentry kind="src" path="contrib/src/problems"/> + <classpathentry kind="src" path="bigdata/src/samples"/> + <classpathentry kind="src" path="dsi-utils/src/test"/> + <classpathentry kind="src" path="ctc-striterators/src/java"/> + <classpathentry kind="src" path="ctc-striterators/src/test"/> + <classpathentry kind="src" path="bigdata-perf/bsbm/src/test"/> + <classpathentry kind="lib" path="bigdata-jini/lib/apache/zookeeper-3.2.1.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/dsi-utils-1.0.6-020610.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/lgpl-utils-1.0.6-020610.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/nxparser-6-22-2010.jar"/> + <classpathentry kind="lib" path="bigdata/lib/tuprolog/tuprolog-v2.1.1.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-continuation-7.2.2.v20101205.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-http-7.2.2.v20101205.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-io-7.2.2.v20101205.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-server-7.2.2.v20101205.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-util-7.2.2.v20101205.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/servlet-api-2.5.jar"/> + <classpathentry kind="lib" path="bigdata/lib/jetty/jetty-servlet-7.2.2.v20101205.jar"/> + <classpathentry kind="lib" 
path="bigdata/lib/jetty/jetty-security-7.2.2.v20101205.jar"/> + <classpathentry kind="lib" path="bigdata/lib/jetty/jetty-webapp-7.2.2.v20101205.jar"/> + <classpathentry kind="lib" path="bigdata/lib/jetty/jetty-xml-7.2.2.v20101205.jar"/> + <classpathentry kind="src" path="lgpl-utils/src/java"/> + <classpathentry kind="src" path="lgpl-utils/src/test"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/icu/icu4j-3_6.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/unimi/colt-1.2.0.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/cweb-commons-1.1-b2-dev.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/cweb-extser-0.1-b2-dev.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/cweb-junit-ext-1.1-b3-dev.jar" sourcepath="/junit-ext/src"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/junit-3.8.1.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/apache/log4j-1.2.15.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/icu/icu4jni.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/slf4j-api-1.4.3.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/slf4j-log4j12-1.4.3.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/iris-0.58.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/jgrapht-jdk1.5-0.7.1.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/browser.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/classserver.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/fiddler.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/jini-core.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/jini-ext.jar"/> + <classpathentry exported="true" kind="lib" 
path="bigdata-jini/lib/jini/lib/jsk-lib.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/jsk-platform.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/jsk-resources.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/mahalo.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/mercury.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/norm.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/outrigger.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/reggie.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/start.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/sun-util.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata-jini/lib/jini/lib/tools.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/unimi/fastutil-5.1.5.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/lucene/lucene-analyzers-3.0.0.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/lucene/lucene-core-3.0.0.jar"/> + <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/> + <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/openrdf-sesame-2.3.0-onejar.jar"/> + <classpathentry kind="lib" path="bigdata-sails/lib/sesame-sparql-testsuite-2.3.0.jar"/> + <classpathentry kind="lib" path="bigdata-sails/lib/sesame-store-testsuite-2.3.0.jar"/> + <classpathentry exported="true" kind="lib" path="bigdata/lib/high-scale-lib-v1.1.2.jar"/> + <classpathentry kind="output" path="bin"/> +</classpath> Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/httpd/CounterSetHTTPD.java =================================================================== --- 
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/httpd/CounterSetHTTPD.java 2011-04-13 20:21:27 UTC (rev 4396) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/httpd/CounterSetHTTPD.java 2011-04-13 20:33:10 UTC (rev 4397) @@ -1,3 +1,26 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2011. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ package com.bigdata.counters.httpd; import java.io.ByteArrayInputStream; @@ -241,7 +264,7 @@ { // build model of the controller state. 
- final URLQueryModel model = new URLQueryModel(getService(), + final URLQueryModel model = URLQueryModel.getInstance(getService(), req.uri, req.params, req.headers); renderer = RendererFactory.get(model, counterSelector, mimeType); Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/query/CounterSetQuery.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/query/CounterSetQuery.java 2011-04-13 20:21:27 UTC (rev 4396) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/query/CounterSetQuery.java 2011-04-13 20:33:10 UTC (rev 4397) @@ -114,38 +114,9 @@ */ public class CounterSetQuery { - static protected final Logger log = Logger.getLogger(CounterSetQuery.class); + static private final Logger log = Logger.getLogger(CounterSetQuery.class); /** - * Create a {@link URLQueryModel} from a URL. - * - * @param url - * The URL. - * - * @return The {@link URLQueryModel} - * - * @throws UnsupportedEncodingException - */ - static private URLQueryModel newQueryModel(final URL url) - throws UnsupportedEncodingException { - - // Extract the URL query parameters. - final LinkedHashMap<String, Vector<String>> params = NanoHTTPD - .decodeParams(url.getQuery(), - new LinkedHashMap<String, Vector<String>>()); - - // add any relevant headers - final Map<String, String> headers = new TreeMap<String, String>( - new CaseInsensitiveStringComparator()); - - headers.put("host", url.getHost() + ":" + url.getPort()); - - return new URLQueryModel(null/* service */, url.toString(), params, - headers); - - } - - /** * Reads a list of {@link URL}s from a file. Blank lines and comment lines * are ignored. 
* @@ -477,7 +448,7 @@ for (URL url : urls) { - queries.add(newQueryModel(url)); + queries.add(URLQueryModel.getInstance(url)); } Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/query/URLQueryModel.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/query/URLQueryModel.java 2011-04-13 20:21:27 UTC (rev 4396) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/query/URLQueryModel.java 2011-04-13 20:33:10 UTC (rev 4397) @@ -31,6 +31,8 @@ import java.io.File; import java.io.UnsupportedEncodingException; import java.lang.reflect.Field; +import java.net.URL; +import java.net.URLDecoder; import java.net.URLEncoder; import java.text.DateFormat; import java.text.DecimalFormat; @@ -38,13 +40,19 @@ import java.text.NumberFormat; import java.util.Arrays; import java.util.Collection; +import java.util.Enumeration; import java.util.HashMap; import java.util.Iterator; import java.util.LinkedHashMap; import java.util.Map; +import java.util.TreeMap; import java.util.Vector; import java.util.regex.Pattern; +import javax.servlet.http.HttpServletRequest; +import javax.servlet.http.HttpServletResponse; + +import org.CognitiveWeb.util.CaseInsensitiveStringComparator; import org.apache.log4j.Logger; import com.bigdata.counters.History; @@ -54,6 +62,7 @@ import com.bigdata.service.Event; import com.bigdata.service.IEventReportingService; import com.bigdata.service.IService; +import com.bigdata.util.httpd.NanoHTTPD; /** * The model for a URL used to query an {@link ICounterSelector}. 
@@ -63,7 +72,7 @@ */ public class URLQueryModel { - protected static transient final Logger log = Logger.getLogger(URLQueryModel.class); + private static transient final Logger log = Logger.getLogger(URLQueryModel.class); /** * Name of the URL query parameter specifying the starting path for the page @@ -207,10 +216,15 @@ */ final public LinkedHashMap<String,Vector<String>> params; +// /** +// * The request headers. +// */ +// final public Map<String,String> headers; + /** - * The request headers. + * The reconstructed request URL. */ - final public Map<String,String> headers; + private final String requestURL; /** * The value of the {@link #PATH} query parameter. @@ -329,7 +343,9 @@ final public File file; /** - * @param fed + * Factory for {@link NanoHTTPD} integration. + * + * @param service * The service object IFF one was specified when * {@link CounterSetHTTPD} was started. * @param uri @@ -345,16 +361,134 @@ * @param header * Header entries, percent decoded */ - public URLQueryModel(final IService service, final String uri, + public static URLQueryModel getInstance(// + final IService service,// + final String uri,// + final LinkedHashMap<String, Vector<String>> params,// + final Map<String, String> headers// + ) { + + /* + * Re-create the request URL, including the protocol, host, port, and + * path but not any query parameters. + */ + + final StringBuilder sb = new StringBuilder(); + + // protocol (known from the container). + sb.append("http://"); + + // host and port + sb.append(headers.get("host")); + + // path (including the leading '/') + sb.append(uri); + + final String requestURL = sb.toString(); + + return new URLQueryModel(service, uri, params, requestURL); + + } + + /** + * Factory for Servlet API integration. + * + * @param service + * The service object IFF one was specified when + * {@link CounterSetHTTPD} was started. + * @param req + * The request. + * @param resp + * The response. 
+ */ + public static URLQueryModel getInstance(// + final IService service, + final HttpServletRequest req, + final HttpServletResponse resp + ) throws UnsupportedEncodingException { + + final String uri = URLDecoder.decode(req.getRequestURI(), "UTF-8"); + + final LinkedHashMap<String, Vector<String>> params = new LinkedHashMap<String, Vector<String>>(); + + @SuppressWarnings("unchecked") + final Enumeration<String> enames = req.getParameterNames(); + + while (enames.hasMoreElements()) { + + final String name = enames.nextElement(); + + final String[] values = req.getParameterValues(name); + + final Vector<String> value = new Vector<String>(); + + for (String v : values) { + + value.add(v); + + } + + params.put(name, value); + + } + + final String requestURL = req.getRequestURL().toString(); + + return new URLQueryModel(service, uri, params, requestURL); + + } + + /** + * Create a {@link URLQueryModel} from a URL. This is useful when serving + * historical performance counter data out of a file. + * + * @param url + * The URL. + * + * @return The {@link URLQueryModel} + * + * @throws UnsupportedEncodingException + */ + static public URLQueryModel getInstance(final URL url) + throws UnsupportedEncodingException { + + // Extract the URL query parameters. 
+ final LinkedHashMap<String, Vector<String>> params = NanoHTTPD + .decodeParams(url.getQuery(), + new LinkedHashMap<String, Vector<String>>()); + + // add any relevant headers + final Map<String, String> headers = new TreeMap<String, String>( + new CaseInsensitiveStringComparator()); + + headers.put("host", url.getHost() + ":" + url.getPort()); + + return URLQueryModel.getInstance(null/* service */, url.toString(), + params, headers); + + } + + private URLQueryModel(final IService service, final String uri, final LinkedHashMap<String, Vector<String>> params, - final Map<String, String> headers) { + final String requestURL) { + if (uri == null) + throw new IllegalArgumentException(); + + if (params == null) + throw new IllegalArgumentException(); + + if (requestURL == null) + throw new IllegalArgumentException(); + this.uri = uri; this.params = params; - this.headers = headers; +// this.headers = headers; + this.requestURL = requestURL; + this.path = getProperty(params, PATH, ICounterSet.pathSeparator); if (log.isInfoEnabled()) @@ -691,19 +825,8 @@ */ public StringBuilder getRequestURL() { - final StringBuilder sb = new StringBuilder(); + return new StringBuilder(requestURL); - // protocol (known from the container). 
- sb.append("http://"); - - // host and port - sb.append(headers.get("host")); - - // path (including the leading '/') - sb.append(uri); - - return sb; - } /** Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/render/RendererFactory.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/render/RendererFactory.java 2011-04-13 20:21:27 UTC (rev 4396) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/counters/render/RendererFactory.java 2011-04-13 20:33:10 UTC (rev 4397) @@ -28,11 +28,6 @@ package com.bigdata.counters.render; -import java.io.IOException; -import java.io.Writer; - -import com.bigdata.counters.CounterSet; -import com.bigdata.counters.ICounter; import com.bigdata.counters.query.ICounterSelector; import com.bigdata.counters.query.ReportEnum; import com.bigdata.counters.query.URLQueryModel; Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2011-04-13 20:21:27 UTC (rev 4396) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2011-04-13 20:33:10 UTC (rev 4397) @@ -2708,6 +2708,18 @@ } + public void abortContext(final IAllocationContext context) { + + assertCanWrite(); + + if(_bufferStrategy instanceof RWStrategy) { + + ((RWStrategy) _bufferStrategy).abortContext(context); + + } + + } + final public long getRootAddr(final int index) { final ReadLock lock = _fieldReadWriteLock.readLock(); Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractTask.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractTask.java 2011-04-13 20:21:27 UTC (rev 4396) +++ 
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractTask.java 2011-04-13 20:33:10 UTC (rev 4397) @@ -604,24 +604,50 @@ synchronized (name2Addr) { /* - * FIXME In order to use shadow allocations, the unisolated - * index MUST be loaded using the IsolatedActionJournal. There - * are two places immediate below where it tests the cache and - * where it loads using the AbstractJournal, both of which are - * not appropriate as they fail to impose the - * IsolatedActionJournal with the consequence that the - * allocation contexts are not isolated. + * RWStore: There are two reasons why we must use shadow + * allocations for unisolated operations against the RWStore. + * + * (1) A rollback of an unisolated operation which caused + * mutations to the structure of an index will cause operations + * which access the index the rollback to fail since the backing + * allocations would have been immediately recycled by the + * RWStore (if running with a zero retention policy). + * + * (2) Allocations made during the unisolated operation should + * be immediately recycled if the operation is rolled back. This + * will not occur unless the unisolated operation makes those + * allocations against a shadow allocation context. Given that + * it does so, the rollback logic must also discard the shadow + * allocator in order for the shadowed allocations to be + * reclaimed immediately. + * + * In order to use shadow allocations, the unisolated index MUST + * be loaded using the IsolatedActionJournal. There are two + * places immediate below where it tests the cache and where it + * loads using the AbstractJournal, both of which are not + * appropriate as they fail to impose the IsolatedActionJournal + * with the consequence that the allocation contexts are not + * isolated. [Also, we do not want N2A to cache references to a + * B+Tree backed by a different shadow journal.] */ - // recover from unisolated index cache. 
- btree = name2Addr.getIndexCache(name); -// btree = null; // do not use the name2Addr cache. + if ((resourceManager.getLiveJournal().getBufferStrategy() instanceof RWStrategy)) { + /* + * Note: Do NOT use the name2Addr cache for the RWStore. + * Each unisolated index view MUST be backed by a shadow + * journal! + */ + btree = null; + } else { + // recover from unisolated index cache. + btree = name2Addr.getIndexCache(name); + } if (btree == null) { final IJournal tmp; - tmp = resourceManager.getLiveJournal(); -// tmp = getJournal();// wrap with the IsolatedActionJournal. +// tmp = resourceManager.getLiveJournal(); + tmp = getJournal();// wrap with the IsolatedActionJournal. // re-load btree from the store. btree = BTree.load(// @@ -636,8 +662,8 @@ // add to the unisolated index cache (must not exist). name2Addr.putIndexCache(name, btree, false/* replace */); - btree.setBTreeCounters((resourceManager) - .getIndexCounters(name)); + btree.setBTreeCounters(resourceManager + .getIndexCounters(name)); } @@ -686,7 +712,7 @@ /** * Given the name of an index and a {@link BTree}, obtain the view for all * source(s) described by the {@link BTree}s index partition metadata (if - * any),insert that view into the {@link #indexCache}, and return the view. + * any), inserts that view into the {@link #indexCache}, and return the view. * <p> * Note: This method is used both when registering a new index ({@link #registerIndex(String, BTree)}) * and when reading an index view from the source ({@link #getIndex(String)}). @@ -954,8 +980,6 @@ l.btree.writeCheckpoint(); - ((IsolatedActionJournal) getJournal()).detachContext(); - } if(INFO) { @@ -1054,11 +1078,18 @@ } + /* + * Do clean up. + */ + // clear n2a. n2a.clear(); // clear the commit list. commitList.clear(); + + // Detach the allocation context used by the operation. 
+ ((IsolatedActionJournal) getJournal()).detachContext(); final long elapsed = System.nanoTime() - begin; @@ -1073,6 +1104,17 @@ } + /** + * Discard any allocations for writes on unisolated indices touched by an + * {@link ITx#UNISOLATED} task which fails, while the task still holds its + * locks. + */ + private void abortTask() { + + ((IsolatedActionJournal) getJournal()).abortContext(); + + } + /* * End isolation support for name2addr. */ @@ -1747,7 +1789,7 @@ * * @throws Exception * - * FIXME update javadoc to reflect the change in how the locks are acquired. + * @todo update javadoc to reflect the change in how the locks are acquired. */ private T doUnisolatedReadWriteTask() throws Exception { @@ -2010,7 +2052,7 @@ * prevent tasks from progressing. If there is strong lock contention then * writers will be more or less serialized. * - * FIXME javadoc update to reflect the {@link NonBlockingLockManager} + * @todo javadoc update to reflect the {@link NonBlockingLockManager} * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ @@ -2044,8 +2086,15 @@ // invoke doTask() on AbstractTask with locks. final T ret = delegate.doTask(); + // checkpoint while holding locks. + delegate.checkpointNanoTime = delegate.checkpointTask(); + + return ret; + + } catch(Throwable t) { + /* - * FIXME If there is an error in the task execution, then for + * RWStore: If there is an error in the task execution, then for * RWStore we need to explicitly undo the allocations for the * B+Tree(s) on which this task wrote. If we do not take this * step, then the records already written onto the store up to @@ -2053,12 +2102,11 @@ * succeeds. This is essentially a persistent memory leak on the * store. */ + delegate.abortTask(); - // checkpoint while holding locks. - delegate.checkpointNanoTime = delegate.checkpointTask(); - - return ret; - + // rethrow the exception.
+ throw new RuntimeException(t); + } finally { /* @@ -2248,7 +2296,7 @@ * declare a lock - such views will always be read-only and support * concurrent readers. */ - public IIndex getIndex(String name, long timestamp) { + public IIndex getIndex(final String name, final long timestamp) { if (timestamp == ITx.UNISOLATED) { @@ -2393,15 +2441,15 @@ // return delegate.getKeyBuilder(); // } - public void force(boolean metadata) { + public void force(final boolean metadata) { delegate.force(metadata); } - public int getByteCount(long addr) { + public int getByteCount(final long addr) { return delegate.getByteCount(addr); } - public ICommitRecord getCommitRecord(long timestamp) { + public ICommitRecord getCommitRecord(final long timestamp) { return delegate.getCommitRecord(timestamp); } @@ -2413,7 +2461,7 @@ return delegate.getFile(); } - public long getOffset(long addr) { + public long getOffset(final long addr) { return delegate.getOffset(addr); } @@ -2429,7 +2477,7 @@ return delegate.getResourceMetadata(); } - public long getRootAddr(int index) { + public long getRootAddr(final int index) { return delegate.getRootAddr(index); } @@ -2461,7 +2509,7 @@ // delegate.packAddr(out, addr); // } - public ByteBuffer read(long addr) { + public ByteBuffer read(final long addr) { return delegate.read(addr); } @@ -2469,19 +2517,19 @@ return delegate.size(); } - public long toAddr(int nbytes, long offset) { + public long toAddr(final int nbytes, final long offset) { return delegate.toAddr(nbytes, offset); } - public String toString(long addr) { + public String toString(final long addr) { return delegate.toString(addr); } - public IRootBlockView getRootBlock(long commitTime) { + public IRootBlockView getRootBlock(final long commitTime) { return delegate.getRootBlock(commitTime); } - public Iterator<IRootBlockView> getRootBlocks(long startTime) { + public Iterator<IRootBlockView> getRootBlocks(final long startTime) { return delegate.getRootBlocks(startTime); } @@ -2492,16 +2540,16 @@ * 
the IsolatedActionJournal as the IAllocationContext. This causes the * allocations to be scoped to the AbstractTask. */ - - public long write(ByteBuffer data) { + + public long write(final ByteBuffer data) { return delegate.write(data, this); } - public long write(ByteBuffer data, long oldAddr) { + public long write(final ByteBuffer data, final long oldAddr) { return delegate.write(data, oldAddr, this); } - public void delete(long addr) { + public void delete(final long addr) { delegate.delete(addr, this); } @@ -2517,12 +2565,16 @@ // return delegate.write(data, oldAddr, context); // } - public void detachContext() { - delegate.detachContext(this); - } + public void detachContext() { + delegate.detachContext(this); + } - public ScheduledFuture<?> addScheduledTask(Runnable task, - long initialDelay, long delay, TimeUnit unit) { + public void abortContext() { + delegate.abortContext(this); + } + + public ScheduledFuture<?> addScheduledTask(final Runnable task, + final long initialDelay, final long delay, final TimeUnit unit) { return delegate.addScheduledTask(task, initialDelay, delay, unit); } Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/RWStrategy.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/RWStrategy.java 2011-04-13 20:21:27 UTC (rev 4396) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/RWStrategy.java 2011-04-13 20:33:10 UTC (rev 4397) @@ -265,12 +265,18 @@ } - public void detachContext(final IAllocationContext context) { - - m_store.detachContext(context); - - } + public void detachContext(final IAllocationContext context) { + + m_store.detachContext(context); + + } + public void abortContext(final IAllocationContext context) { + + m_store.abortContext(context); + + } + /** * Operation is not supported. 
* Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rawstore/AbstractRawStore.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rawstore/AbstractRawStore.java 2011-04-13 20:21:27 UTC (rev 4396) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/rawstore/AbstractRawStore.java 2011-04-13 20:33:10 UTC (rev 4397) @@ -96,4 +96,12 @@ public void detachContext(IAllocationContext context) { // NOP } + + /** + * The default implementation is a NOP. + */ + public void abortContext(final IAllocationContext context) { + // NOP + } + } Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/util/httpd/NanoHTTPD.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/util/httpd/NanoHTTPD.java 2011-04-13 20:21:27 UTC (rev 4396) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/util/httpd/NanoHTTPD.java 2011-04-13 20:33:10 UTC (rev 4397) @@ -112,6 +112,9 @@ * </pre> * * @version $Id$ + * + * @deprecated This is being replaced by the use of the Servlet API and embedded + * use of jetty as a light weight servlet container. */ public class NanoHTTPD implements IServiceShutdown { Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/NanoSparqlServer.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/NanoSparqlServer.java 2011-04-13 20:21:27 UTC (rev 4396) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/NanoSparqlServer.java 2011-04-13 20:33:10 UTC (rev 4397) @@ -167,6 +167,9 @@ * (2) I need to verify that the exclusive semaphore logic for the * unisolated sail connection works with cross thread access. Someone had * pointed out a bizarre hole in this.... 
+ * + * @deprecated This has been replaced by the class of the same name in the + * <code>com.bigdata.sail.webapp</code> package. */ public class NanoSparqlServer extends AbstractHTTPD { Added: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataBaseContext.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataBaseContext.java (rev 0) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataBaseContext.java 2011-04-13 20:33:10 UTC (rev 4397) @@ -0,0 +1,33 @@ +package com.bigdata.rdf.sail.webapp; + +import org.apache.log4j.Logger; + +import com.bigdata.journal.IIndexManager; + +/** + * Context object provides access to the {@link IIndexManager}. + * + * @author Martyn Cutcher + */ +public class BigdataBaseContext { + + static private final Logger log = Logger.getLogger(BigdataBaseContext.class); + + private final IIndexManager m_indexManager; + + public BigdataBaseContext(final IIndexManager indexManager) { + + if (indexManager == null) + throw new IllegalArgumentException(); + + m_indexManager = indexManager; + + } + + public IIndexManager getIndexManager() { + + return m_indexManager; + + } + +} Property changes on: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataBaseContext.java ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Deleted: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataContext.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataContext.java 2011-04-13 20:21:27 UTC (rev 4396) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataContext.java 2011-04-13 20:33:10 UTC (rev 4397) @@ -1,616 
+0,0 @@ -package com.bigdata.rdf.sail.webapp; - -import info.aduna.xml.XMLWriter; - -import java.io.IOException; -import java.io.OutputStream; -import java.io.OutputStreamWriter; -import java.io.StringWriter; -import java.util.Map; -import java.util.UUID; -import java.util.concurrent.Callable; -import java.util.concurrent.ConcurrentHashMap; -import java.util.concurrent.ScheduledFuture; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicLong; - -import javax.servlet.ServletContext; -import javax.servlet.http.HttpServletRequest; - -import org.apache.log4j.Logger; -import org.openrdf.query.MalformedQueryException; -import org.openrdf.query.QueryLanguage; -import org.openrdf.query.parser.ParsedQuery; -import org.openrdf.query.parser.QueryParser; -import org.openrdf.query.parser.sparql.SPARQLParserFactory; -import org.openrdf.query.resultio.sparqlxml.SPARQLResultsXMLWriter; -import org.openrdf.repository.RepositoryException; -import org.openrdf.rio.rdfxml.RDFXMLWriter; -import org.openrdf.sail.SailException; - -import com.bigdata.bop.engine.QueryEngine; -import com.bigdata.journal.IIndexManager; -import com.bigdata.journal.ITx; -import com.bigdata.journal.TimestampUtility; -import com.bigdata.rawstore.Bytes; -import com.bigdata.rdf.sail.BigdataSail; -import com.bigdata.rdf.sail.BigdataSailGraphQuery; -import com.bigdata.rdf.sail.BigdataSailRepository; -import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; -import com.bigdata.rdf.sail.BigdataSailTupleQuery; -import com.bigdata.rdf.sail.bench.NanoSparqlServer; -import com.bigdata.rdf.sail.webapp.BigdataServlet.QueryType; -import com.bigdata.rdf.store.AbstractTripleStore; -import com.bigdata.util.concurrent.ThreadPoolExecutorBaseStatisticsTask; -import com.bigdata.util.httpd.NanoHTTPD; - -/** - * - * @author Martyn Cutcher - * - */ -public class BigdataContext { - /** - * The logger for the concrete {@link NanoSparqlServer} class. The {@link NanoHTTPD} - * class has its own logger. 
- */ - static private final Logger log = Logger.getLogger(BigdataServlet.class); - - static private BigdataContext s_context; - - private final Config m_config; - private final IIndexManager m_indexManager; - private final QueryParser m_engine; - - private final ScheduledFuture<?> m_queueStatsFuture; - private final ThreadPoolExecutorBaseStatisticsTask m_queueSampleTask; - - /** - * The currently executing queries (does not include queries where a client - * has established a connection but the query is not running because the - * {@link #queryService} is blocking). - */ - protected final ConcurrentHashMap<Long/* queryId */, RunningQuery> m_queries = new ConcurrentHashMap<Long, RunningQuery>(); - - public Map<Long, RunningQuery> getQueries() { - return m_queries; - } - - /** - * Factory for the query identifiers. - */ - protected final AtomicLong m_queryIdFactory = new AtomicLong(); - - public AtomicLong getQueryIdFactory() { - return m_queryIdFactory; - } - - /** - * This call establishes the context to run the servlets that - * use it in an embedded server. - * - * @param config - * @param indexManager - * @return the BigdataContext - */ - synchronized static public BigdataContext establishContext(final Config config, final IIndexManager indexManager) - throws SailException, RepositoryException, IOException { - if (s_context == null) { - s_context = new BigdataContext(config, indexManager); - } - - return s_context; - } - - /** - * When a servlet starts up in a web container it establishes the BigdataContext - * that will be defined by the context parameters in the web.xml file. 
- * - * @param context - * @return the BigdataContext - */ - synchronized static BigdataContext establishContext(ServletContext context) { - if (s_context == null) { - // TODO get config info from servlet context - } - - return s_context; - } - - static BigdataContext getContext() { - return s_context; - } - - public BigdataContext(final Config config, final IIndexManager indexManager) throws IOException, SailException, - RepositoryException { - - if (config.namespace == null) - throw new IllegalArgumentException(); - - if (indexManager == null) - throw new IllegalArgumentException(); - - m_config = config; - - m_indexManager = indexManager; - - // used to parse qeries. - m_engine = new SPARQLParserFactory().getParser(); - - if (indexManager.getCollectQueueStatistics()) { - - final long initialDelay = 0; // initial delay in ms. - final long delay = 1000; // delay in ms. - final TimeUnit unit = TimeUnit.MILLISECONDS; - - // FIXME add mechanism for stats sampling - // queueSampleTask = new ThreadPoolExecutorBaseStatisticsTask( - // (ThreadPoolExecutor) queryService); - // - // queueStatsFuture = indexManager.addScheduledTask(queueSampleTask, - // initialDelay, delay, unit); - - m_queueSampleTask = null; - - m_queueStatsFuture = null; - } else { - - m_queueSampleTask = null; - - m_queueStatsFuture = null; - - } - - } - - public void shutdownNow() { - if(log.isInfoEnabled()) - log.info("Normal shutdown."); - - // Stop collecting queue statistics. - if (m_queueStatsFuture != null) - m_queueStatsFuture.cancel(true/* mayInterruptIfRunning */); - - } - - public IIndexManager getIndexManager() { - return m_indexManager; - } - - /** - * Configuration object. - */ - public static class Config { - - /** - * When true, suppress various things otherwise written on stdout. - */ - public boolean quiet = false; - - /** - * The port on which the server will answer requests -or- ZERO to - * use any open port. - */ - public int port = 80; - - /** - * The default namespace. 
- */ - public String namespace; - - /** - * The default timestamp used to query the default namespace. The server - * will obtain a read only transaction which reads from the commit point - * associated with this timestamp. - */ - public long timestamp = ITx.UNISOLATED; - - /** - * The #of threads to use to handle SPARQL queries -or- ZERO (0) for an - * unbounded pool. - */ - public int queryThreadPoolSize = 8; - -// /** -// * The capacity of the buffers for the pipe connecting the running query -// * to the HTTP response. -// */ -// public int bufferCapacity = Bytes.kilobyte32 * 1; - - public String resourceBase = "."; - - public Config() { - } - - } - - public Config getConfig() { - return m_config; - } - - public ThreadPoolExecutorBaseStatisticsTask getSampleTask() { - return m_queueSampleTask; - } - - public static void clear() { - if (s_context != null) { - s_context.shutdownNow(); - s_context = null; - } - } - - /** - * Abstract base class for running queries handles the timing, pipe, - * reporting, obtains the connection, and provides the finally {} semantics - * for each type of query task. - * - * @author <a href="mailto:tho...@us...">Bryan - * Thompson</a> - * @version $Id$ - */ - public abstract class AbstractQueryTask implements Callable<Void> { - - /** The namespace against which the query will be run. */ - private final String namespace; - - /** - * The timestamp of the view for that namespace against which the query - * will be run. - */ - private final long timestamp; - - /** The SPARQL query string. */ - protected final String queryStr; - - /** - * A symbolic constant indicating the type of query. - */ - protected final QueryType queryType; - - /** - * The negotiated MIME type to be used for the query response. - */ - protected final String mimeType; - - /** A pipe used to incrementally deliver the results to the client. */ - private final OutputStream os; - - /** - * Sesame has an option for a base URI during query evaluation. 
This - * provides a symbolic place holder for that URI in case we ever provide - * a hook to set it. - */ - protected final String baseURI = null; - - /** - * The queryId used by the {@link NanoSparqlServer}. - */ - protected final Long queryId; - - /** - * The queryId used by the {@link QueryEngine}. - */ - protected final UUID queryId2; - - /** - * - * @param namespace - * The namespace against which the query will be run. - * @param timestamp - * The timestamp of the view for that namespace against which - * the query will be run. - * @param queryStr - * The SPARQL query string. - * @param os - * A pipe used to incrementally deliver the results to the - * client. - */ - protected AbstractQueryTask(final String namespace, - final long timestamp, final String queryStr, - final QueryType queryType, - final String mimeType, - final OutputStream os) { - - this.namespace = namespace; - this.timestamp = timestamp; - this.queryStr = queryStr; - this.queryType = queryType; - this.mimeType = mimeType; - this.os = os; - this.queryId = Long.valueOf(m_queryIdFactory.incrementAndGet()); - this.queryId2 = UUID.randomUUID(); - - } - - /** - * Execute the query. - * - * @param cxn - * The connection. - * @param os - * Where the write the query results. - * - * @throws Exception - */ - abstract protected void doQuery(BigdataSailRepositoryConnection cxn, - OutputStream os) throws Exception; - - final public Void call() throws Exception { - final long begin = System.nanoTime(); - BigdataSailRepositoryConnection cxn = null; - try { - cxn = getQueryConnection(namespace, timestamp); - m_queries.put(queryId, new RunningQuery(queryId.longValue(),queryId2, - queryStr, begin)); - if(log.isTraceEnabled()) - log.trace("Query running..."); -// try { - doQuery(cxn, os); -// } catch(Throwable t) { -// /* -// * Log the query and the exception together. 
-// */ -// log.error(t.getLocalizedMessage() + ":\n" + queryStr, t); -// } - if(log.isTraceEnabled()) - log.trace("Query done - flushing results."); - os.flush(); - os.close(); - if(log.isTraceEnabled()) - log.trace("Query done - output stream closed."); - return null; - } catch (Throwable t) { - // launder and rethrow the exception. - throw BigdataServlet.launderThrowable(t, os, queryStr); - } finally { - m_queries.remove(queryId); - try { - os.close(); - } catch (Throwable t) { - log.error(t, t); - } - try { - if (cxn != null) - cxn.close(); - } catch (Throwable t) { - log.error(t, t); - } - } - } - - } - - /** - * Executes a tuple query. - */ - private class TupleQueryTask extends AbstractQueryTask { - - public TupleQueryTask(final String namespace, final long timestamp, - final String queryStr, final QueryType queryType, - final String mimeType, final OutputStream os) { - - super(namespace, timestamp, queryStr, queryType, mimeType, os); - - } - - protected void doQuery(final BigdataSailRepositoryConnection cxn, - final OutputStream os) throws Exception { - - final BigdataSailTupleQuery query = cxn.prepareTupleQuery( - QueryLanguage.SPARQL, queryStr, baseURI); - - if (true) { - StringWriter strw = new StringWriter(); - - query.evaluate(new SPARQLResultsXMLWriter(new XMLWriter(strw))); - - OutputStreamWriter outstr = new OutputStreamWriter(os); - String res = strw.toString(); - outstr.write(res); - outstr.flush(); - outstr.close(); - } else { - query.evaluate(new SPARQLResultsXMLWriter(new XMLWriter(os))); - } - } - - } - - /** - * Executes a graph query. 
- */ - private class GraphQueryTask extends AbstractQueryTask { - - public GraphQueryTask(final String namespace, final long timestamp, - final String queryStr, final QueryType queryType, - final String mimeType, final OutputStream os) { - - super(namespace, timestamp, queryStr, queryType, mimeType, os); - - } - - @Override - protected void doQuery(final BigdataSailRepositoryConnection cxn, - final OutputStream os) throws Exception { - - final BigdataSailGraphQuery query = cxn.prepareGraphQuery( - QueryLanguage.SPARQL, queryStr, baseURI); - - query.evaluate(new RDFXMLWriter(os)); - - } - - } - ... [truncated message content] |
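The shadow-allocation protocol introduced by the AbstractTask changes above (checkpoint while holding locks; on failure, abort the allocation context so the task's writes are recycled immediately; detach the context during normal cleanup) can be sketched with a toy model. Everything below is illustrative only: the Store and ShadowContext classes and their bodies are assumptions for the sketch, not the bigdata implementation; only the detachContext/abortContext names mirror the diff.

```java
import java.util.ArrayList;
import java.util.List;

public class ShadowAllocationDemo {

    /** A shadow allocation context: tracks addresses allocated by one task. */
    static class ShadowContext {
        final List<Long> allocations = new ArrayList<>();
    }

    /** Toy store: globally visible allocations plus per-task shadow contexts. */
    static class Store {
        final List<Long> committed = new ArrayList<>();
        long nextAddr = 1;

        /** Allocate against the task's shadow context, not the global set. */
        long write(final ShadowContext ctx) {
            final long addr = nextAddr++;
            ctx.allocations.add(addr);
            return addr;
        }

        /** Task succeeded: fold the shadow allocations into the global set. */
        void detachContext(final ShadowContext ctx) {
            committed.addAll(ctx.allocations);
            ctx.allocations.clear();
        }

        /** Task failed: discard the shadow allocator so its addresses can be
         *  recycled immediately (no persistent leak on the store). */
        void abortContext(final ShadowContext ctx) {
            ctx.allocations.clear();
        }
    }

    /** Run a task; on failure, undo its allocations while still "holding locks". */
    static int runTask(final Store store, final boolean fail) {
        final ShadowContext ctx = new ShadowContext();
        try {
            store.write(ctx);
            store.write(ctx);
            if (fail)
                throw new RuntimeException("task failed");
            store.detachContext(ctx); // checkpoint: keep the allocations
        } catch (RuntimeException t) {
            store.abortContext(ctx); // rollback: recycle the allocations
        }
        return store.committed.size();
    }

    public static void main(final String[] args) {
        final Store store = new Store();
        if (runTask(store, false) != 2)
            throw new AssertionError();
        if (runTask(store, true) != 2) // failed task leaked nothing
            throw new AssertionError();
        System.out.println("ok");
    }
}
```

The key property being modeled is the one the commit comment describes: without the abortContext() call in the catch path, a failed unisolated task would leave its writes allocated on the store forever.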
From: <tho...@us...> - 2011-04-15 16:24:25
Revision: 4403 http://bigdata.svn.sourceforge.net/bigdata/?rev=4403&view=rev Author: thompsonbry Date: 2011-04-15 16:24:17 +0000 (Fri, 15 Apr 2011) Log Message: ----------- Added support for POST with URIs and unit tests for same. Added logic to the unit tests to verify mutation counters. Still need to add support for default-graph-uri and named-graph-uri protocol parameters. Just about ready to wrap as a deployable webapp. Removed an old test helper class which was not used by anything anymore. Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/bop/rdf/TestAll.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataServlet.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/RESTServlet.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer.java Added Paths: ----------- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java Removed Paths: ------------- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/RDFLoadAndValidateHelper.java Modified: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/bop/rdf/TestAll.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/bop/rdf/TestAll.java 2011-04-15 14:02:24 UTC (rev 4402) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/bop/rdf/TestAll.java 2011-04-15 16:24:17 UTC (rev 4403) @@ -61,6 +61,8 @@ final TestSuite suite = new TestSuite("RDF operators"); + suite.addTestSuite(TestBOpUtility.class); + // Aggregate
operators (COUNT, SUM, MIN, MAX, etc.) suite.addTest(com.bigdata.bop.rdf.aggregate.TestAll.suite()); Deleted: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/RDFLoadAndValidateHelper.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/RDFLoadAndValidateHelper.java 2011-04-15 14:02:24 UTC (rev 4402) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/store/RDFLoadAndValidateHelper.java 2011-04-15 16:24:17 UTC (rev 4403) @@ -1,213 +0,0 @@ -/** - -The Notice below must appear in each file of the Source Code of any -copy you distribute of the Licensed Product. Contributors to any -Modifications may add their own copyright notices to identify their -own contributions. - -License: - -The contents of this file are subject to the CognitiveWeb Open Source -License Version 1.1 (the License). You may not copy or use this file, -in either source code or executable form, except in compliance with -the License. You may obtain a copy of the License from - - http://www.CognitiveWeb.org/legal/license/ - -Software distributed under the License is distributed on an AS IS -basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See -the License for the specific language governing rights and limitations -under the License. - -Copyrights: - -Portions created by or assigned to CognitiveWeb are Copyright -(c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact -information for CognitiveWeb is available at - - http://www.CognitiveWeb.org - -Portions Copyright (c) 2002-2003 Bryan Thompson. - -Acknowledgements: - -Special thanks to the developers of the Jabber Open Source License 1.0 -(JOSL), from which this License was derived. This License contains -terms that differ from JOSL. - -Special thanks to the CognitiveWeb Open Source Contributors for their -suggestions and support of the Cognitive Web. 
- -Modifications: - -*/ -/* - * Created on May 8, 2008 - */ - -package com.bigdata.rdf.store; - -import java.io.File; -import java.io.FilenameFilter; -import java.util.concurrent.TimeUnit; - -import org.openrdf.rio.RDFFormat; - -import com.bigdata.counters.CounterSet; -import com.bigdata.rdf.load.ConcurrentDataLoader; -import com.bigdata.rdf.load.FileSystemLoader; -import com.bigdata.rdf.load.RDFLoadTaskFactory; -import com.bigdata.rdf.load.RDFVerifyTaskFactory; -import com.bigdata.rdf.rio.RDFParserOptions; -import com.bigdata.service.IBigdataFederation; - -/** - * Helper class for concurrent data load and post-load verification. - * - * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ - */ -public class RDFLoadAndValidateHelper { - - final IBigdataFederation fed; - - final int nthreads; - - final int bufferCapacity; - - final ConcurrentDataLoader service; - - final File file; - - final FilenameFilter filter; - - final RDFParserOptions parserOptions; - - final RDFFormat fallback = RDFFormat.RDFXML; - - final int nclients; - - final int clientNum; - - public RDFLoadAndValidateHelper(IBigdataFederation fed, int nthreads, - int bufferCapacity, File file, FilenameFilter filter) { - - this(fed,nthreads,bufferCapacity,file, filter, 1/*nclients*/,0/*clientNum*/); - - } - - public RDFLoadAndValidateHelper(IBigdataFederation fed, int nthreads, - int bufferCapacity, File file, FilenameFilter filter, int nclients, - int clientNum) { - - this.fed = fed; - - this.nthreads = nthreads; - - this.bufferCapacity = bufferCapacity; - - service = new ConcurrentDataLoader(fed, nthreads); - - this.file = file; - - this.filter = filter; - - this.nclients = nclients; - - this.clientNum = clientNum; - - this.parserOptions = new RDFParserOptions(); - - parserOptions.setVerifyData(false); - - } - - public void load(final AbstractTripleStore db) throws InterruptedException { - - // Note: no write buffer for 'verify' since it is not doing any writes! 
- final RDFLoadTaskFactory loadTaskFactory = new RDFLoadTaskFactory(db, - bufferCapacity, parserOptions, false/* deleteAfter */, fallback); - - final FileSystemLoader scanner = new FileSystemLoader(service, - nclients, clientNum); - - /* - * Note: Add the counters to be reported to the client's counter - * set. The added counters will be reported when the client reports its - * own counters. - */ - final CounterSet serviceRoot = fed.getServiceCounterSet(); - - final String relPath = "Concurrent Data Loader"; - - synchronized (serviceRoot) { - - if (serviceRoot.getPath(relPath) == null) { - - // Create path to CDL counter set. - final CounterSet tmp = serviceRoot.makePath(relPath); - - // Attach CDL counters. - tmp.attach(service.getCounters()); - - // Attach task factory counters. - tmp.attach(loadTaskFactory.getCounters()); - - } - - } - - // notify will run tasks. - loadTaskFactory.notifyStart(); - - // read files and run tasks. - scanner.process(file, filter, loadTaskFactory); - - // await completion of all tasks. - service.awaitCompletion(Long.MAX_VALUE, TimeUnit.MILLISECONDS); - - // notify did run tasks. - loadTaskFactory.notifyEnd(); - - System.err.println(loadTaskFactory.reportTotals()); - - } - - public void validate(AbstractTripleStore db) throws InterruptedException { - - final RDFVerifyTaskFactory verifyTaskFactory = new RDFVerifyTaskFactory( - db, bufferCapacity, parserOptions, false/*deleteAfter*/, fallback); - - final FileSystemLoader scanner = new FileSystemLoader(service, - nclients, clientNum); - - // notify will run tasks. - verifyTaskFactory.notifyStart(); - - // read files and run tasks. - scanner.process(file, filter, verifyTaskFactory); - - // await completion of all tasks. - service.awaitCompletion(Long.MAX_VALUE, TimeUnit.MILLISECONDS); - - // notify did run tasks. 
- verifyTaskFactory.notifyEnd(); - - // Report on #terms and #stmts parsed, found, and not found - System.err.println(verifyTaskFactory.reportTotals()); - - } - - public void shutdownNow() { - - service.shutdownNow(); - - } - - protected void finalize() throws Throwable { - - service.shutdownNow(); - - } - -} Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java 2011-04-15 14:02:24 UTC (rev 4402) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java 2011-04-15 16:24:17 UTC (rev 4403) @@ -252,6 +252,18 @@ } + /** + * Report a mutation count and elapsed time back to the user agent. + * + * @param resp + * The response. + * @param nmodified + * The mutation count. + * @param elapsed + * The elapsed time (milliseconds). 
+ * + * @throws IOException + */ protected void reportModifiedCount(final HttpServletResponse resp, final long nmodified, final long elapsed) throws IOException { @@ -260,7 +272,7 @@ t.root("data").attr("modified", nmodified) .attr("milliseconds", elapsed).close(); - buildResponse(resp, HTTP_OK, MIME_TEXT_XML, t.toString()); + buildResponse(resp, HTTP_OK, MIME_APPLICATION_XML, t.toString()); } Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataServlet.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataServlet.java 2011-04-15 14:02:24 UTC (rev 4402) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataServlet.java 2011-04-15 16:24:17 UTC (rev 4403) @@ -6,6 +6,7 @@ import java.io.InputStream; import java.io.InputStreamReader; import java.io.OutputStream; +import java.io.Writer; import javax.servlet.ServletContext; import javax.servlet.http.HttpServlet; @@ -49,22 +50,23 @@ * Some HTTP response status codes */ public static final transient int - HTTP_OK = HttpServletResponse.SC_ACCEPTED, - HTTP_REDIRECT = HttpServletResponse.SC_TEMPORARY_REDIRECT, - HTTP_FORBIDDEN = HttpServletResponse.SC_FORBIDDEN, - HTTP_NOTFOUND = HttpServletResponse.SC_NOT_FOUND, + HTTP_OK = HttpServletResponse.SC_OK, +// HTTP_ACCEPTED = HttpServletResponse.SC_ACCEPTED, +// HTTP_REDIRECT = HttpServletResponse.SC_TEMPORARY_REDIRECT, +// HTTP_FORBIDDEN = HttpServletResponse.SC_FORBIDDEN, +// HTTP_NOTFOUND = HttpServletResponse.SC_NOT_FOUND, HTTP_BADREQUEST = HttpServletResponse.SC_BAD_REQUEST, HTTP_METHOD_NOT_ALLOWED = HttpServletResponse.SC_METHOD_NOT_ALLOWED, HTTP_INTERNALERROR = HttpServletResponse.SC_INTERNAL_SERVER_ERROR, HTTP_NOTIMPLEMENTED = HttpServletResponse.SC_NOT_IMPLEMENTED; /** - * Common mime types for dynamic content + * Common MIME types for dynamic content. 
*/ public static final transient String MIME_TEXT_PLAIN = "text/plain", MIME_TEXT_HTML = "text/html", - MIME_TEXT_XML = "text/xml", +// MIME_TEXT_XML = "text/xml", MIME_DEFAULT_BINARY = "application/octet-stream", MIME_APPLICATION_XML = "application/xml", MIME_TEXT_JAVASCRIPT = "text/javascript", @@ -92,8 +94,8 @@ } - static protected void buildResponse(HttpServletResponse resp, int status, - String mimeType) throws IOException { + static protected void buildResponse(final HttpServletResponse resp, + final int status, final String mimeType) throws IOException { resp.setStatus(status); @@ -101,22 +103,31 @@ } - static protected void buildResponse(HttpServletResponse resp, int status, - String mimeType, String content) throws IOException { + static protected void buildResponse(final HttpServletResponse resp, final int status, + final String mimeType, final String content) throws IOException { buildResponse(resp, status, mimeType); - resp.getWriter().print(content); + final Writer w = resp.getWriter(); + w.write(content); + + w.flush(); + } - static protected void buildResponse(HttpServletResponse resp, int status, - String mimeType, InputStream content) throws IOException { + static protected void buildResponse(final HttpServletResponse resp, + final int status, final String mimeType, final InputStream content) + throws IOException { buildResponse(resp, status, mimeType); - copyStream(content, resp.getOutputStream()); + final OutputStream os = resp.getOutputStream(); + copyStream(content, os); + + os.flush(); + } /** Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java 2011-04-15 14:02:24 UTC (rev 4402) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java 2011-04-15 16:24:17 UTC (rev 4403) @@ 
-121,7 +121,7 @@ namespace); /* - * FIXME The RDF for the *query* will be generated using the + * TODO The RDF for the *query* will be generated using the * MIME type negotiated based on the Accept header (if any) * in the DELETE request. That means that we need to look at * the Accept header here and chose the right RDFFormat for Added: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java (rev 0) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java 2011-04-15 16:24:17 UTC (rev 4403) @@ -0,0 +1,407 @@ +package com.bigdata.rdf.sail.webapp; + +import java.net.HttpURLConnection; +import java.net.URL; +import java.net.URLConnection; +import java.util.Arrays; +import java.util.Vector; +import java.util.concurrent.atomic.AtomicLong; + +import javax.servlet.http.HttpServletRequest; +import javax.servlet.http.HttpServletResponse; + +import org.apache.log4j.Logger; +import org.openrdf.model.Resource; +import org.openrdf.model.Statement; +import org.openrdf.rio.RDFFormat; +import org.openrdf.rio.RDFHandlerException; +import org.openrdf.rio.RDFParser; +import org.openrdf.rio.RDFParserFactory; +import org.openrdf.rio.RDFParserRegistry; +import org.openrdf.rio.helpers.RDFHandlerBase; +import org.openrdf.sail.SailException; + +import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; +import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; + +/** + * Handler for INSERT operations. 
+ * + * @author martyncutcher + */ +public class InsertServlet extends BigdataRDFServlet { + + /** + * + */ + private static final long serialVersionUID = 1L; + + static private final transient Logger log = Logger.getLogger(InsertServlet.class); + + public InsertServlet() { + + } + + /** + * <p> + * Perform an HTTP-POST, which corresponds to the basic CRUD operation + * "create" according to the generic interaction semantics of HTTP REST. The + * operation will be executed against the target namespace per the URI. + * </p> + * + * <pre> + * POST [/namespace/NAMESPACE] + * ... + * Content-Type: + * ... + * + * BODY + * </pre> + * <p> + * Where <code>BODY</code> is the new RDF content using the representation + * indicated by the <code>Content-Type</code>. + * </p> + * <p> + * -OR- + * </p> + * + * <pre> + * POST [/namespace/NAMESPACE] ?uri=URL + * </pre> + * <p> + * Where <code>URI</code> identifies a resource whose RDF content will be + * inserted into the database. The <code>uri</code> query parameter may + * occur multiple times. All identified resources will be loaded within a + * single native transaction. Bigdata provides snapshot isolation so you can + * continue to execute queries against the last commit point while this + * operation is executed. + * </p> + */ + @Override + protected void doPost(HttpServletRequest req, HttpServletResponse resp) { + try { + if (req.getParameter("uri") != null) { + doPostWithURIs(req, resp); + return; + } else { + doPostWithBody(req, resp); + return; + } + } catch (Exception e) { + throw new RuntimeException(e); + } + } + + /** + * POST with request body containing statements to be inserted. + * + * @param req + * The request. + * + * @return The response. + * + * @throws Exception + */ + private void doPostWithBody(final HttpServletRequest req, + final HttpServletResponse resp) throws Exception { + + final long begin = System.currentTimeMillis(); + + final String baseURI = "";// @todo baseURI query parameter? 
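As an editor's aside: the `POST [/namespace/NAMESPACE] ?uri=URL` form documented in the InsertServlet javadoc above accepts the `uri` query parameter multiple times, with each value URL-encoded. A minimal client-side sketch of building such a request URL follows; the class name, method name, and endpoint are hypothetical illustrations — only the `uri` parameter name comes from the patch.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

/**
 * Illustrative sketch (not part of the patch): builds the request URL for
 * the InsertServlet's "POST ?uri=URL" form. The servlet accepts the "uri"
 * query parameter multiple times and loads all identified resources within
 * a single native transaction.
 */
public class InsertUrlBuilder {

    /**
     * Append each resource as a repeated, URL-encoded "uri" query
     * parameter on the service URL.
     */
    public static String buildInsertUrl(final String serviceURL,
            final String[] uris) {
        try {
            final StringBuilder sb = new StringBuilder(serviceURL);
            for (int i = 0; i < uris.length; i++) {
                sb.append(i == 0 ? "?" : "&");
                sb.append("uri=").append(URLEncoder.encode(uris[i], "UTF-8"));
            }
            return sb.toString();
        } catch (UnsupportedEncodingException e) {
            // UTF-8 is always supported, so this cannot happen.
            throw new RuntimeException(e);
        }
    }

    public static void main(final String[] args) {
        // Hypothetical endpoint; the real path depends on the deployment.
        System.out.println(buildInsertUrl("http://localhost:8080/namespace/kb",
                new String[] { "http://example.org/a.rdf",
                        "http://example.org/b.rdf" }));
    }
}
```

A client would then issue an HTTP POST with an empty body against the resulting URL; the servlet dereferences each resource and reports the mutation count in its response.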
+ + final String namespace = getNamespace(req.getRequestURI()); + + final String contentType = req.getContentType(); + + if (log.isInfoEnabled()) + log.info("Request body: " + contentType); + + final RDFFormat format = RDFFormat.forMIMEType(contentType); + + if (format == null) { + + buildResponse(resp, HTTP_BADREQUEST, MIME_TEXT_PLAIN, + "Content-Type not recognized as RDF: " + contentType); + + return; + + } + + if (log.isInfoEnabled()) + log.info("RDFFormat=" + format); + + final RDFParserFactory rdfParserFactory = RDFParserRegistry + .getInstance().get(format); + + if (rdfParserFactory == null) { + + buildResponse(resp, HTTP_INTERNALERROR, MIME_TEXT_PLAIN, + "Parser factory not found: Content-Type=" + + contentType + ", format=" + format); + + return; + + } + + try { + + final AtomicLong nmodified = new AtomicLong(0L); + + BigdataSailRepositoryConnection conn = null; + try { + + conn = getBigdataRDFContext() + .getUnisolatedConnection(namespace); + + /* + * There is a request body, so let's try and parse it. + */ + + final RDFParser rdfParser = rdfParserFactory.getParser(); + + rdfParser.setValueFactory(conn.getTripleStore() + .getValueFactory()); + + rdfParser.setVerifyData(true); + + rdfParser.setStopAtFirstError(true); + + rdfParser + .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); + + rdfParser.setRDFHandler(new AddStatementHandler(conn + .getSailConnection(), nmodified)); + + /* + * Run the parser, which will cause statements to be inserted. + */ + rdfParser.parse(req.getInputStream(), baseURI); + + // Commit the mutation. + conn.commit(); + + final long elapsed = System.currentTimeMillis() - begin; + + reportModifiedCount(resp, nmodified.get(), elapsed); + + return; + + } finally { + + if (conn != null) + conn.close(); + + } + + } catch (Exception ex) { + + // Will be rendered as an INTERNAL_ERROR. + throw new RuntimeException(ex); + + } + + } + + /** + * POST with URIs of resources to be inserted (loads the referenced + * resources). 
+ * + * @param req + * The request. + * + * @return The response. + * + * @throws Exception + */ + private void doPostWithURIs(final HttpServletRequest req, + final HttpServletResponse resp) throws Exception { + + final long begin = System.currentTimeMillis(); + + final String namespace = getNamespace(req.getRequestURI()); + + final String[] uris = req.getParameterValues("uri"); + + if (uris == null) + throw new UnsupportedOperationException(); + + if (uris.length == 0) { + + reportModifiedCount(resp, 0L/* nmodified */, System + .currentTimeMillis() + - begin); + + return; + + } + + if (log.isInfoEnabled()) + log.info("URIs: " + Arrays.toString(uris)); + + // Before we do anything, make sure we have valid URLs. + final Vector<URL> urls = new Vector<URL>(uris.length); + + for (String uri : uris) { + + urls.add(new URL(uri)); + + } + + try { + + final AtomicLong nmodified = new AtomicLong(0L); + + BigdataSailRepositoryConnection conn = null; + try { + + conn = getBigdataRDFContext().getUnisolatedConnection( + namespace); + + for (URL url : urls) { + + URLConnection hconn = null; + try { + + hconn = url.openConnection(); + if (hconn instanceof HttpURLConnection) { + ((HttpURLConnection) hconn).setRequestMethod("GET"); + } + hconn.setDoInput(true); + hconn.setDoOutput(false); + hconn.setReadTimeout(0);// no timeout? http param? + + /* + * There is a request body, so let's try and parse it. 
*/ + + final String contentType = hconn.getContentType(); + + final RDFFormat format = RDFFormat + .forMIMEType(contentType); + + if (format == null) { + buildResponse(resp, HTTP_BADREQUEST, + MIME_TEXT_PLAIN, + "Content-Type not recognized as RDF: " + + contentType); + + return; + } + + final RDFParserFactory rdfParserFactory = RDFParserRegistry + .getInstance().get(format); + + if (rdfParserFactory == null) { + buildResponse(resp, HTTP_INTERNALERROR, + MIME_TEXT_PLAIN, + "Parser not found: Content-Type=" + + contentType); + + return; + } + + final RDFParser rdfParser = rdfParserFactory + .getParser(); + + rdfParser.setValueFactory(conn.getTripleStore() + .getValueFactory()); + + rdfParser.setVerifyData(true); + + rdfParser.setStopAtFirstError(true); + + rdfParser + .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); + + rdfParser.setRDFHandler(new AddStatementHandler(conn + .getSailConnection(), nmodified)); + + /* + * Run the parser, which will cause statements to be + * inserted. + */ + + rdfParser.parse(hconn.getInputStream(), url + .toExternalForm()/* baseURL */); + + } finally { + + if (hconn instanceof HttpURLConnection) { + /* + * Disconnect, but only after we have loaded all the + * URLs. Disconnect is optional for java.net. It is a + * hint that you will not be accessing more resources on + * the connected host. By disconnecting only after all + * resources have been loaded we are basically assuming + * that people are more likely to load from a single + * host. + */ + ((HttpURLConnection) hconn).disconnect(); + } + + } + + } // next URI. + + // Commit the mutation. + conn.commit(); + + final long elapsed = System.currentTimeMillis() - begin; + + reportModifiedCount(resp, nmodified.get(), elapsed); + + } finally { + + if (conn != null) + conn.close(); + + } + + } catch (Exception ex) { + + // Will be rendered as an INTERNAL_ERROR. 
+ throw new RuntimeException(ex); + + } + + } + + /** + * Helper class adds statements to the sail as they are visited by a parser. + */ + private static class AddStatementHandler extends RDFHandlerBase { + + private final BigdataSailConnection conn; + private final AtomicLong nmodified; + + public AddStatementHandler(final BigdataSailConnection conn, + final AtomicLong nmodified) { + this.conn = conn; + this.nmodified = nmodified; + } + + public void handleStatement(Statement stmt) throws RDFHandlerException { + + try { + + conn.addStatement(// + stmt.getSubject(), // + stmt.getPredicate(), // + stmt.getObject(), // + (Resource[]) (stmt.getContext() == null ? new Resource[] { } + : new Resource[] { stmt.getContext() })// + ); + + } catch (SailException e) { + + throw new RDFHandlerException(e); + + } + + nmodified.incrementAndGet(); + + } + + } + +} Property changes on: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/RESTServlet.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/RESTServlet.java 2011-04-15 14:02:24 UTC (rev 4402) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/RESTServlet.java 2011-04-15 16:24:17 UTC (rev 4403) @@ -26,6 +26,7 @@ * The delegates to which we dispatch the various requests. 
*/ private QueryServlet m_queryServlet; + private InsertServlet m_insertServlet; private DeleteServlet m_deleteServlet; private UpdateServlet m_updateServlet; @@ -42,10 +43,12 @@ super.init(); m_queryServlet = new QueryServlet(); + m_insertServlet = new InsertServlet(); m_updateServlet = new UpdateServlet(); m_deleteServlet = new DeleteServlet(); m_queryServlet.init(getServletConfig()); + m_insertServlet.init(getServletConfig()); m_updateServlet.init(getServletConfig()); m_deleteServlet.init(getServletConfig()); @@ -62,6 +65,11 @@ m_queryServlet = null; } + if (m_insertServlet != null) { + m_insertServlet.destroy(); + m_insertServlet = null; + } + if (m_updateServlet != null) { m_updateServlet.destroy(); m_updateServlet = null; @@ -87,34 +95,45 @@ } - /** - * A query can be submitted with a POST if a query parameter is provided. - * - * Otherwise delegate to the UpdateServlet - */ + /** + * A query can be submitted with a POST if a query parameter is provided. + * Otherwise delegate to the {@link InsertServlet} or {@link DeleteServlet} + * as appropriate. + */ @Override protected void doPost(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { if (req.getParameter("delete") != null) { + // DELETE via POST w/ Body. m_deleteServlet.doPost(req, resp); } else if (req.getParameter("query") != null) { - - m_queryServlet.doGet(req, resp); + + // QUERY via POST + m_queryServlet.doPost(req, resp); + + } else if(req.getParameter("uri") != null) { + + // INSERT via POST w/ URIs + m_insertServlet.doPost(req, resp); } else { - - m_updateServlet.doPut(req, resp); + + // INSERT via POST w/ Body + m_insertServlet.doPost(req, resp); } } - - /** - * A PUT request always delegates to the UpdateServlet - */ + + /** + * A PUT request always delegates to the {@link UpdateServlet}. + * <p> + * Note: The semantics of PUT are "DELETE+INSERT" for the API. PUT is not + * supported for just "INSERT". Use POST instead for that purpose. 
+ */ @Override protected void doPut(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { @@ -124,10 +143,7 @@ } /** - * A DELETE request will delete statements indicated by a provided namespace - * URI and an optional query parameter. - * - * Delegate to the DeleteServlet. + * Delegate to the {@link DeleteServlet}. */ @Override protected void doDelete(final HttpServletRequest req, Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java 2011-04-15 14:02:24 UTC (rev 4402) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java 2011-04-15 16:24:17 UTC (rev 4403) @@ -1,413 +1,35 @@ package com.bigdata.rdf.sail.webapp; -import java.io.IOException; -import java.net.HttpURLConnection; -import java.net.URL; -import java.util.Arrays; -import java.util.Vector; -import java.util.concurrent.atomic.AtomicLong; - import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import org.apache.log4j.Logger; -import org.openrdf.model.Resource; -import org.openrdf.model.Statement; -import org.openrdf.rio.RDFFormat; -import org.openrdf.rio.RDFHandlerException; -import org.openrdf.rio.RDFParser; -import org.openrdf.rio.RDFParserFactory; -import org.openrdf.rio.RDFParserRegistry; -import org.openrdf.rio.helpers.RDFHandlerBase; -import org.openrdf.sail.SailException; -import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; -import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; - /** - * Handler for update (POST + * Handler for UPDATE operations (PUT). * * @author martyncutcher + * + * FIXME The UPDATE API is not finished yet. It will provide + * DELETE+INSERT semantics. 
*/ public class UpdateServlet extends BigdataRDFServlet { - + /** * */ private static final long serialVersionUID = 1L; - - static private final transient Logger log = Logger.getLogger(UpdateServlet.class); + static private final transient Logger log = Logger + .getLogger(UpdateServlet.class); + public UpdateServlet() { - + } - /** - * <p> - * Perform an HTTP-POST, which corresponds to the basic CRUD operation - * "create" according to the generic interaction semantics of HTTP REST. The - * operation will be executed against the target namespace per the URI. - * </p> - * - * <pre> - * POST [/namespace/NAMESPACE] - * ... - * Content-Type: - * ... - * - * BODY - * </pre> - * <p> - * Where <code>BODY</code> is the new RDF content using the representation - * indicated by the <code>Content-Type</code>. - * </p> - * <p> - * -OR- - * </p> - * - * <pre> - * POST [/namespace/NAMESPACE] ?uri=URL - * </pre> - * <p> - * Where <code>URI</code> identifies a resource whose RDF content will be - * inserted into the database. The <code>uri</code> query parameter may - * occur multiple times. All identified resources will be loaded within a - * single native transaction. Bigdata provides snapshot isolation so you can - * continue to execute queries against the last commit point while this - * operation is executed. 
- * </p> - */ @Override - protected void doPost(HttpServletRequest req, HttpServletResponse resp) { - try { - if (req.getParameter("uri") != null) { - doPostWithURIs(req, resp); - return; - } else { - doPostWithBody(req, resp); - return; - } - } catch (Exception e) { - throw new RuntimeException(e); - } - } - - @Override - protected void doPut(HttpServletRequest req, HttpServletResponse resp) { - try { - doPostWithBody(req, resp); - } catch (Exception e) { - throw new RuntimeException(e); - } - } - - @Override - protected void doGet(HttpServletRequest req, HttpServletResponse resp) { - try { - buildResponse(resp, HTTP_METHOD_NOT_ALLOWED, MIME_TEXT_PLAIN, - "GET method not valid for update"); - } catch (IOException e) { - throw new RuntimeException(e); - } - } - - /** - * POST with request body containing statements to be inserted. - * - * @param req - * The request. - * - * @return The response. - * - * @throws Exception - */ - private void doPostWithBody(final HttpServletRequest req, - final HttpServletResponse resp) throws Exception { - - final long begin = System.currentTimeMillis(); - - final String baseURI = "";// @todo baseURI query parameter? 
- - final String namespace = getNamespace(req.getRequestURI()); - - final String contentType = req.getContentType(); - - if (log.isInfoEnabled()) - log.info("Request body: " + contentType); - - final RDFFormat format = RDFFormat.forMIMEType(contentType); - - if (format == null) { - - buildResponse(resp, HTTP_BADREQUEST, MIME_TEXT_PLAIN, - "Content-Type not recognized as RDF: " + contentType); - - return; - - } - - if (log.isInfoEnabled()) - log.info("RDFFormat=" + format); - - final RDFParserFactory rdfParserFactory = RDFParserRegistry - .getInstance().get(format); - - if (rdfParserFactory == null) { - - buildResponse(resp, HTTP_INTERNALERROR, MIME_TEXT_PLAIN, - "Parser factory not found: Content-Type=" - + contentType + ", format=" + format); - - return; - - } - - try { - - final AtomicLong nmodified = new AtomicLong(0L); - - BigdataSailRepositoryConnection conn = null; - try { - - conn = getBigdataRDFContext() - .getUnisolatedConnection(namespace); - - /* - * There is a request body, so let's try and parse it. - */ - - final RDFParser rdfParser = rdfParserFactory.getParser(); - - rdfParser.setValueFactory(conn.getTripleStore() - .getValueFactory()); - - rdfParser.setVerifyData(true); - - rdfParser.setStopAtFirstError(true); - - rdfParser - .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); - - rdfParser.setRDFHandler(new AddStatementHandler(conn - .getSailConnection(), nmodified)); - - /* - * Run the parser, which will cause statements to be inserted. - */ - rdfParser.parse(req.getInputStream(), baseURI); - - // Commit the mutation. - conn.commit(); - - final long elapsed = System.currentTimeMillis() - begin; - - reportModifiedCount(resp, nmodified.get(), elapsed); - - return; - - } finally { - - if (conn != null) - conn.close(); - - } - - } catch (Exception ex) { - - // Will be rendered as an INTERNAL_ERROR. 
- throw new RuntimeException(ex); - - } - + protected void doPut(HttpServletRequest req, HttpServletResponse resp) { + throw new UnsupportedOperationException(); } - /** - * POST with URIs of resources to be inserted (loads the referenced - * resources). - * - * @param req - * The request. - * - * @return The response. - * - * @throws Exception - */ - private void doPostWithURIs(final HttpServletRequest req, - final HttpServletResponse resp) throws Exception { - - final long begin = System.currentTimeMillis(); - - final String namespace = getNamespace(req.getRequestURI()); - - final String contentType = req.getContentType(); - - final String[] uris = req.getParameterValues("uri"); - - if (uris == null) - throw new UnsupportedOperationException(); - - if (uris.length == 0) { - - reportModifiedCount(resp, 0L/* nmodified */, System - .currentTimeMillis() - - begin); - - return; - - } - - if (log.isInfoEnabled()) - log.info("URIs: " + Arrays.toString(uris)); - - // Before we do anything, make sure we have valid URLs. - final Vector<URL> urls = new Vector<URL>(uris.length); - - for (String uri : uris) { - - urls.add(new URL(uri)); - - } - - try { - - final AtomicLong nmodified = new AtomicLong(0L); - - BigdataSailRepositoryConnection conn = null; - try { - - conn = getBigdataRDFContext().getUnisolatedConnection( - namespace); - - for (URL url : urls) { - - HttpURLConnection hconn = null; - try { - - hconn = (HttpURLConnection) url.openConnection(); - hconn.setRequestMethod("GET"); - hconn.setReadTimeout(0);// no timeout? http param? - - /* - * There is a request body, so let's try and parse it. 
- */ - - final RDFFormat format = RDFFormat - .forMIMEType(contentType); - - if (format == null) { - buildResponse(resp, HTTP_BADREQUEST, - MIME_TEXT_PLAIN, - "Content-Type not recognized as RDF: " - + contentType); - - return; - } - - final RDFParserFactory rdfParserFactory = RDFParserRegistry - .getInstance().get(format); - - if (rdfParserFactory == null) { - buildResponse(resp, HTTP_INTERNALERROR, - MIME_TEXT_PLAIN, - "Parser not found: Content-Type=" - + contentType); - - return; - } - - final RDFParser rdfParser = rdfParserFactory - .getParser(); - - rdfParser.setValueFactory(conn.getTripleStore() - .getValueFactory()); - - rdfParser.setVerifyData(true); - - rdfParser.setStopAtFirstError(true); - - rdfParser - .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); - - rdfParser.setRDFHandler(new AddStatementHandler(conn - .getSailConnection(), nmodified)); - - /* - * Run the parser, which will cause statements to be - * inserted. - */ - - rdfParser.parse(req.getInputStream(), url - .toExternalForm()/* baseURL */); - - } finally { - - if (hconn != null) - hconn.disconnect(); - - } // next URI. - - } - - // Commit the mutation. - conn.commit(); - - final long elapsed = System.currentTimeMillis(); - - reportModifiedCount(resp, nmodified.get(), elapsed); - - } finally { - - if (conn != null) - conn.close(); - - } - - } catch (Exception ex) { - - // Will be rendered as an INTERNAL_ERROR. - throw new RuntimeException(ex); - - } - - } - - /** - * Helper class adds statements to the sail as they are visited by a parser. 
- */ - private static class AddStatementHandler extends RDFHandlerBase { - - private final BigdataSailConnection conn; - private final AtomicLong nmodified; - - public AddStatementHandler(final BigdataSailConnection conn, - final AtomicLong nmodified) { - this.conn = conn; - this.nmodified = nmodified; - } - - public void handleStatement(Statement stmt) throws RDFHandlerException { - - try { - - conn.addStatement(// - stmt.getSubject(), // - stmt.getPredicate(), // - stmt.getObject(), // - (Resource[]) (stmt.getContext() == null ? new Resource[] { } - : new Resource[] { stmt.getContext() })// - ); - - } catch (SailException e) { - - throw new RDFHandlerException(e); - - } - - nmodified.incrementAndGet(); - - } - - } - - } Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer.java 2011-04-15 14:02:24 UTC (rev 4402) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer.java 2011-04-15 16:24:17 UTC (rev 4403) @@ -16,6 +16,9 @@ import java.util.Properties; import java.util.concurrent.atomic.AtomicLong; +import javax.xml.parsers.SAXParser; +import javax.xml.parsers.SAXParserFactory; + import junit.framework.TestCase2; import org.eclipse.jetty.server.Server; @@ -50,6 +53,8 @@ import org.openrdf.rio.RDFWriterFactory; import org.openrdf.rio.RDFWriterRegistry; import org.openrdf.rio.helpers.StatementCollector; +import org.xml.sax.Attributes; +import org.xml.sax.ext.DefaultHandler2; import com.bigdata.journal.BufferMode; import com.bigdata.journal.ITx; @@ -78,10 +83,13 @@ * writer, but not a parser) before we can test queries which CONNEG for a * JSON result set. * + * @todo Add tests for TRIPLES mode (the tests are running against a quads mode + * KB instance). 
+ * * @todo Add tests for SIDS mode interchange of RDF XML. * - * @todo The methods which return a mutation count should verify the returned - * XML document. + * @todo Tests which verify the correct rejection of illegal or ill-formed + * requests. * * @todo Test suite for reading from a historical commit point. * @@ -95,7 +103,10 @@ private Server m_fixture; private String m_serviceURL; - final private static String requestPath = ""; + /** + * The request path for the REST API under test. + */ + final private static String requestPath = "/"; protected void setUp() throws Exception { @@ -105,24 +116,24 @@ m_jnl = new Journal(properties); - // Create the kb instance. - new LocalTripleStore(m_jnl, namespace, ITx.UNISOLATED, properties).create(); + // Create the kb instance. + new LocalTripleStore(m_jnl, namespace, ITx.UNISOLATED, properties) + .create(); -// /* -// * Service will not hold a read lock. -// * -// * Queries will read from the last commit point by default and will use -// * a read-only tx to have snapshot isolation for that query. -// */ -// config.timestamp = ITx.READ_COMMITTED; + // /* + // * Service will not hold a read lock. + // * + // * Queries will read from the last commit point by default and will + // use + // * a read-only tx to have snapshot isolation for that query. + // */ + // config.timestamp = ITx.READ_COMMITTED; final Map<String, String> initParams = new LinkedHashMap<String, String>(); { - - initParams.put( - ConfigParams.NAMESPACE, - namespace); - + + initParams.put(ConfigParams.NAMESPACE, namespace); + } // Start server for that kb instance. m_fixture = NanoSparqlServer @@ -130,7 +141,6 @@ m_fixture.start(); -// final int port = m_fixture.getPort(); final int port = m_fixture.getConnectors()[0].getLocalPort(); // log.info("Getting host address"); @@ -238,17 +248,17 @@ */ private static class QueryOptions { - /** The default timeout (ms). */ - private static final int DEFAULT_TIMEOUT = 2000; - /** The URL of the SPARQL endpoint. 
*/ public String serviceURL = null; /** The HTTP method (GET, POST, etc). */ public String method = "GET"; - /** The SPARQL query. */ + /** + * The SPARQL query (this is a short hand for setting the + * <code>query</code> URL query parameter). + */ public String queryStr = null; - /** TODO DG and NG protocol params: The default graph URI (optional). */ - public String defaultGraphUri = null; + /** Request parameters to be formatted as URL query parameters. */ + public Map<String,String[]> requestParams; /** The accept header. */ public String acceptHeader = // BigdataRDFServlet.MIME_SPARQL_RESULTS_XML + ";q=1" + // @@ -281,26 +291,65 @@ * * @param opts * The query request. + * @param requestPath + * The request path, including the leading "/". * * @return The connection. */ protected HttpURLConnection doSparqlQuery(final QueryOptions opts, - final String servlet) throws Exception { + final String requestPath) throws Exception { - // Fully formed and encoded URL. - final String urlString = opts.serviceURL - + "/" - + servlet - + "?query=" - + URLEncoder.encode(opts.queryStr, "UTF-8") - + (opts.defaultGraphUri == null ? "" - : ("&default-graph-uri=" + URLEncoder.encode( - opts.defaultGraphUri, "UTF-8"))); + /* + * Generate the fully formed and encoded URL. + */ + final StringBuilder urlString = new StringBuilder(opts.serviceURL); + + urlString.append(requestPath); + + if (opts.queryStr != null) { + + if (opts.requestParams == null) { + + opts.requestParams = new LinkedHashMap<String, String[]>(); + + } + + opts.requestParams.put("query", new String[] { opts.queryStr }); + + } + + if (opts.requestParams != null) { + /* + * Add any URL query parameters. + */ + boolean first = true; + for (Map.Entry<String, String[]> e : opts.requestParams.entrySet()) { + urlString.append(first ? "?" 
: "&"); + first = false; + final String name = e.getKey(); + final String[] vals = e.getValue(); + if (vals == null) { + urlString.append(URLEncoder.encode(name, "UTF-8")); + } else { + for (String val : vals) { + urlString.append(URLEncoder.encode(name, "UTF-8")); + urlString.append("="); + urlString.append(URLEncoder.encode(val, "UTF-8")); + } + } + } // next Map.Entry +// + "?query=" +// + URLEncoder.encode(opts.queryStr, "UTF-8") +// + (opts.defaultGraphUri == null ? "" +// : ("&default-graph-uri=" + URLEncoder.encode( +// opts.defaultGraphUri, "UTF-8"))); + } + HttpURLConnection conn = null; try { - conn = doConnect(urlString, opts.method); + conn = doConnect(urlString.toString(), opts.method); conn.setReadTimeout(opts.timeout); @@ -521,7 +570,83 @@ } - /** + /** + * Class representing the result of a mutation operation against the REST + * API. + * + * TODO Refactor into the non-test code base? + */ + private static class MutationResult { + + /** The mutation count. */ + public final long mutationCount; + + /** The elapsed time for the operation. 
*/ + public final long elapsedMillis; + + public MutationResult(final long mutationCount, final long elapsedMillis) { + this.mutationCount = mutationCount; + this.elapsedMillis = elapsedMillis; + } + + } + + protected MutationResult getMutationResult(final HttpURLConnection conn) throws Exception { + + try { + + final String contentType = conn.getContentType(); + + if (!contentType.startsWith(BigdataRDFServlet.MIME_APPLICATION_XML)) { + + fail("Expecting Content-Type of " + + BigdataRDFServlet.MIME_APPLICATION_XML + ", not " + + contentType); + + } + + final SAXParser parser = SAXParserFactory.newInstance().newSAXParser(); + + final AtomicLong mutationCount = new AtomicLong(); + final AtomicLong elapsedMillis = new AtomicLong(); + + /* + * For example: <data modified="5" milliseconds="112"/> + */ + parser.parse(conn.getInputStream(), new DefaultHandler2(){ + + public void startElement(final String uri, + final String localName, final String qName, + final Attributes attributes) { + + if (!"data".equals(qName)) + fail("Expecting: 'data', but have: uri=" + uri + + ", localName=" + localName + ", qName=" + + qName); + + mutationCount.set(Long.valueOf(attributes + .getValue("modified"))); + + elapsedMillis.set(Long.valueOf(attributes + .getValue("milliseconds"))); + + } + + }); + + // done. + return new MutationResult(mutationCount.get(), elapsedMillis.get()); + + } finally { + + // terminate the http connection. + conn.disconnect(); + + } + + } + + /** * Issue a "status" request against the service. */ public void test_STATUS() throws Exception { @@ -572,6 +697,61 @@ } /** + * Generates some statements and serializes them using the specified + * {@link RDFFormat}. + * + * @param ntriples + * The #of statements to generate. + * @param format + * The format. + * + * @return the serialized statements. 
+ */ + private byte[] genNTRIPLES(final int ntriples, final RDFFormat format) + throws RDFHandlerException { + + final Graph g = new GraphImpl(); + + final ValueFactory f = new ValueFactoryImpl(); + + final URI s = f.createURI("http://www.bigdata.org/b"); + + final URI rdfType = f + .createURI("http://www.w3.org/1999/02/22-rdf-syntax-ns#type"); + + for (int i = 0; i < ntriples; i++) { + + final URI o = f.createURI("http://www.bigdata.org/c#" + i); + + g.add(s, rdfType, o); + + } + + final RDFWriterFactory writerFactory = RDFWriterRegistry.getInstance() + .get(format); + + if (writerFactory == null) + fail("RDFWriterFactory not found: format=" + format); + + final ByteArrayOutputStream baos = new ByteArrayOutputStream(); + + final RDFWriter writer = writerFactory.getWriter(baos); + + writer.startRDF(); + + for (Statement stmt : g) { + + writer.handleStatement(stmt); + + } + + writer.endRDF(); + + return baos.toByteArray(); + + } + + /** * "ASK" query using GET with an empty KB. */ public void test_GET_ASK() throws Exception { @@ -661,80 +841,104 @@ } - public void test_POST_UPDATE_withBody_RDFXML() throws Exception { + public void test_POST_INSERT_withBody_RDFXML() throws Exception { - do_UPDATE_withBody("POST", 23, requestPath, RDFFormat.RDFXML); + doInsertWithBodyTest("POST", 23, requestPath, RDFFormat.RDFXML); } - public void test_POST_UPDATE_withBody_NTRIPLES() throws Exception { + public void test_POST_INSERT_withBody_NTRIPLES() throws Exception { - do_UPDATE_withBody("POST", 23, requestPath, RDFFormat.NTRIPLES); + doInsertWithBodyTest("POST", 23, requestPath, RDFFormat.NTRIPLES); } - public void test_POST_UPDATE_withBody_N3() throws Exception { + public void test_POST_INSERT_withBody_N3() throws Exception { - do_UPDATE_withBody("POST", 23, requestPath, RDFFormat.N3); + doInsertWithBodyTest("POST", 23, requestPath, RDFFormat.N3); } - public void test_POST_UPDATE_withBody_TURTLE() throws Exception { + public void test_POST_INSERT_withBody_TURTLE() throws 
Exception { - do_UPDATE_withBody("POST", 23, requestPath, RDFFormat.TURTLE); + doInsertWithBodyTest("POST", 23, requestPath, RDFFormat.TURTLE); } // Note: quads interchange - public void test_POST_UPDATE_withBody_TRIG() throws Exception { + public void test_POST_INSERT_withBody_TRIG() throws Exception { - do_UPDATE_withBody("POST", 23, requestPath, RDFFormat.TRIG); + doInsertWithBodyTest("POST", 23, requestPath, RDFFormat.TRIG); } // Note: quads interchange - public void test_POST_UPDATE_withBody_TRIX() throws Exception { + public void test_POST_INSERT_withBody_TRIX() throws Exception { - do_UPDATE_withBody("POST", 23, requestPath, RDFFormat.TRIX); + doInsertWithBodyTest("POST", 23, requestPath, RDFFormat.TRIX); } - public void test_PUT_UPDATE_withBody_RDFXML() throws Exception { + /** + * Test ability to load data from a URI. + */ + public void test_POST_INSERT_LOAD_FROM_URIs() throws Exception { - do_UPDATE_withBody("PUT", 23, requestPath, RDFFormat.RDFXML); + // Verify nothing in the KB. + { + final String queryStr = "ASK where {?s ?p ?o}"; - } + final QueryOptions opts = new QueryOptions(); + opts.serviceURL = m_serviceURL; + opts.queryStr = queryStr; + opts.method = "GET"; - public void test_PUT_UPDATE_withBody_NTRIPLES() throws Exception { + opts.acceptHeader = BooleanQueryResultFormat.SPARQL + .getDefaultMIMEType(); + assertEquals(false, askResults(doSparqlQuery(opts, requestPath))); + } - do_UPDATE_withBody("PUT", 23, requestPath, RDFFormat.NTRIPLES); + // #of statements in that RDF file. + final long expectedStatementCount = 4; + + // Load the resource into the KB. 
+ { + final QueryOptions opts = new QueryOptions(); + opts.serviceURL = m_serviceURL; + opts.method = "POST"; + opts.requestParams = new LinkedHashMap<String, String[]>(); + opts.requestParams + .put( + "uri", + new String[] { "file:bigdata-rdf/src/test/com/bigdata/rdf/rio/small.rdf" }); - } + final MutationResult result = getMutationResult(doSparqlQuery(opts, + requestPath)); - public void test_PUT_UPDATE_withBody_N3() throws Exception { + assertEquals(expectedStatementCount, result.mutationCount); - do_UPDATE_withBody("PUT", 23, requestPath, RDFFormat.N3); + } - } + /* + * Verify KB has the loaded data. + */ + { + final String queryStr = "SELECT * where {?s ?p ?o}"; - public void test_PUT_UPDATE_withBody_TURTLE() throws Exception { + final QueryOptions opts = new QueryOptions(); + opts.serviceURL = m_serviceURL; + opts.queryStr = queryStr; + opts.method = "GET"; - do_UPDATE_withBody("PUT", 23, requestPath, RDFFormat.TURTLE); + opts.acceptHeader = BooleanQueryResultFormat.SPARQL + .getDefaultMIMEType(); - } + assertEquals(expectedStatementCount, countResults(doSparqlQuery( + opts, requestPath))); + } - public void test_PUT_UPDATE_withBody_TRIG() throws Exception { - - do_UPDATE_withBody("PUT", 23, requestPath, RDFFormat.TRIG); - } - public void test_PUT_UPDATE_withBody_TRIX() throws Exception { - - do_UPDATE_withBody("PUT", 23, requestPath, RDFFormat.TRIX); - - } - /** * Select everything in the kb using a POST. */ @@ -747,11 +951,11 @@ opts.queryStr = queryStr; opts.method = "POST"; - do_UPDATE_withBody("POST", 23, requestPath, RDFFormat.NTRIPLES); + doInsertWithBodyTest("POST", 23, requestPath, RDFFormat.NTRIPLES); assertEquals(23, countResults(doSparqlQuery(opts, requestPath))); - do_DELETE_with_Query(requestPath, "construct {?s ?p ?o} where {?s ?p ?o}"); + doDeleteWit... [truncated message content] |
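The mutation responses exercised by the test methods above are tiny XML documents of the form `<data modified="5" milliseconds="112"/>`. Below is a self-contained sketch of the same SAX parse that getMutationResult() performs, run against an in-memory string rather than an HttpURLConnection stream (the class and method names here are illustrative, not part of the patch):

```java
import java.io.ByteArrayInputStream;
import java.util.concurrent.atomic.AtomicLong;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class MutationResultParser {

    // Parses the REST API's mutation response, e.g.
    //   <data modified="5" milliseconds="112"/>
    // extracting the mutation count and the elapsed milliseconds, the same
    // way getMutationResult() does in the test suite.
    static long[] parse(final String xml) throws Exception {
        final SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        final AtomicLong mutationCount = new AtomicLong();
        final AtomicLong elapsedMillis = new AtomicLong();
        parser.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")),
                new DefaultHandler() {
                    @Override
                    public void startElement(final String uri,
                            final String localName, final String qName,
                            final Attributes attributes) {
                        if (!"data".equals(qName))
                            throw new RuntimeException(
                                    "Expecting 'data', not: " + qName);
                        mutationCount.set(Long.valueOf(attributes
                                .getValue("modified")));
                        elapsedMillis.set(Long.valueOf(attributes
                                .getValue("milliseconds")));
                    }
                });
        return new long[] { mutationCount.get(), elapsedMillis.get() };
    }

    public static void main(final String[] args) throws Exception {
        final long[] r = parse("<data modified=\"5\" milliseconds=\"112\"/>");
        System.out.println(r[0] + " " + r[1]);
    }
}
```

The AtomicLongs play the same role as in the original: they give the anonymous handler a final, mutable slot to write into.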
From: <tho...@us...> - 2011-04-15 21:02:39
Revision: 4407
http://bigdata.svn.sourceforge.net/bigdata/?rev=4407&view=rev
Author: thompsonbry
Date: 2011-04-15 21:02:31 +0000 (Fri, 15 Apr 2011)

Log Message:
-----------
Adding a basic webapp for the REST API (NanoSparqlServer deployment to a servlet container).

Modified Paths:
--------------
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/ConfigParams.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/StatusServlet.java
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/WebAppUnassembled.java

Added Paths:
-----------
branches/QUADS_QUERY_BRANCH/bigdata-war/
branches/QUADS_QUERY_BRANCH/bigdata-war/src/
branches/QUADS_QUERY_BRANCH/bigdata-war/src/html/
branches/QUADS_QUERY_BRANCH/bigdata-war/src/images/
branches/QUADS_QUERY_BRANCH/bigdata-war/src/jsp/
branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/
branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/RWStore.properties
branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/WEB-INF/
branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/WEB-INF/web.xml
branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/log4j.properties

Removed Paths:
-------------
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/web.xml

Modified:
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-04-15 17:07:00 UTC (rev 4406) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-04-15 21:02:31 UTC (rev 4407) @@ -62,7 +62,6 @@ import java.io.IOException; import java.util.Arrays; import java.util.Collections; -import java.util.HashMap; import java.util.Iterator; import java.util.LinkedHashMap; import java.util.Map; @@ -83,16 +82,12 @@ import org.openrdf.model.ValueFactory; import org.openrdf.model.impl.ContextStatementImpl; import org.openrdf.model.impl.NamespaceImpl; -import org.openrdf.query.Binding; import org.openrdf.query.BindingSet; import org.openrdf.query.Dataset; import org.openrdf.query.QueryEvaluationException; -import org.openrdf.query.algebra.LangMatches; import org.openrdf.query.algebra.QueryRoot; import org.openrdf.query.algebra.StatementPattern; import org.openrdf.query.algebra.TupleExpr; -import org.openrdf.query.algebra.ValueConstant; -import org.openrdf.query.algebra.Var; import org.openrdf.query.algebra.evaluation.impl.BindingAssigner; import org.openrdf.query.algebra.evaluation.impl.CompareOptimizer; import org.openrdf.query.algebra.evaluation.impl.ConjunctiveConstraintSplitter; @@ -101,10 +96,6 @@ import org.openrdf.query.algebra.evaluation.impl.QueryJoinOptimizer; import org.openrdf.query.algebra.evaluation.impl.SameTermFilterOptimizer; import org.openrdf.query.algebra.evaluation.util.QueryOptimizerList; -import org.openrdf.query.algebra.helpers.QueryModelVisitorBase; -import org.openrdf.query.impl.BindingImpl; -import org.openrdf.query.impl.DatasetImpl; -import org.openrdf.query.impl.MapBindingSet; import org.openrdf.sail.NotifyingSailConnection; import org.openrdf.sail.Sail; import org.openrdf.sail.SailConnection; @@ -688,9 +679,31 @@ * 
@return The {@link LocalTripleStore}. */ private static LocalTripleStore createLTS(final Properties properties) { - + final Journal journal = new Journal(properties); + return createLTS(journal, properties); + + } + + /** + * If the {@link LocalTripleStore} with the appropriate namespace exists, + * then return it. Otherwise, create the {@link LocalTripleStore}. When the + * properties indicate that full transactional isolation should be + * supported, a new {@link LocalTripleStore} will be created within a + * transaction in order to ensure that it uses isolatable indices. Otherwise + * it is created using the {@link ITx#UNISOLATED} connection. + * + * @param properties + * The properties. + * + * @return The {@link LocalTripleStore}. + */ + public static LocalTripleStore createLTS(final Journal journal, + final Properties properties) { + +// final Journal journal = new Journal(properties); + final ITransactionService txService = journal.getTransactionManager().getTransactionService(); Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java 2011-04-15 17:07:00 UTC (rev 4406) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java 2011-04-15 21:02:31 UTC (rev 4407) @@ -189,8 +189,7 @@ * we should bundle the (namespace,timestamp) together as a single * object). */ - protected long getTimestamp(final String uri, - final HttpServletRequest req) { + protected long getTimestamp(final HttpServletRequest req) { final String timestamp = req.getParameter("timestamp"); @@ -214,16 +213,17 @@ * * @return The namespace. */ - protected String getNamespace(final String uri) { - -// // locate the "//" after the protocol. 
-// final int index = uri.indexOf("//"); + protected String getNamespace(final HttpServletRequest req) { - int snmsp = uri.indexOf("/namespace/"); + final String uri = req.getRequestURI(); + + final int snmsp = uri.indexOf("/namespace/"); if (snmsp == -1) { + // use the default namespace. return getConfig().namespace; + } // locate the next "/" in the URI path. Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java 2011-04-15 17:07:00 UTC (rev 4406) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java 2011-04-15 21:02:31 UTC (rev 4407) @@ -47,9 +47,11 @@ import com.bigdata.journal.ITx; import com.bigdata.journal.Journal; import com.bigdata.rdf.sail.BigdataSail; +import com.bigdata.rdf.store.ScaleOutTripleStore; import com.bigdata.service.AbstractDistributedFederation; import com.bigdata.service.IBigdataFederation; import com.bigdata.service.jini.JiniClient; +import com.bigdata.service.jini.JiniFederation; /** * Listener provides life cycle management of the {@link IIndexManager} by @@ -89,17 +91,33 @@ final String namespace; { - namespace = context.getInitParameter(ConfigParams.NAMESPACE); + String s = context.getInitParameter(ConfigParams.NAMESPACE); - if (namespace == null) - throw new RuntimeException("Required: " - + ConfigParams.NAMESPACE); + if (s == null) + s = ConfigParams.DEFAULT_NAMESPACE; + namespace = s; + if (log.isInfoEnabled()) - log.info("namespace: " + namespace); + log.info(ConfigParams.NAMESPACE + "=" + namespace); } + final boolean create; + { + + final String s = context.getInitParameter(ConfigParams.CREATE); + + if (s != null) + create = Boolean.valueOf(s); + else + create = ConfigParams.DEFAULT_CREATE; + + if 
(log.isInfoEnabled()) + log.info(ConfigParams.CREATE + "=" + create); + + } + final IIndexManager indexManager; if (context.getAttribute(IIndexManager.class.getName()) != null) { @@ -128,7 +146,7 @@ + ConfigParams.PROPERTY_FILE); if (log.isInfoEnabled()) - log.info("propertyFile: " + propertyFile); + log.info(ConfigParams.PROPERTY_FILE + "=" + propertyFile); indexManager = openIndexManager(propertyFile); @@ -137,6 +155,56 @@ } + if(create) { + + // Attempt to resolve the namespace. + if (indexManager.getResourceLocator().locate(namespace, + ITx.UNISOLATED) == null) { + + log.warn("Creating KB instance: namespace=" + namespace); + + if (indexManager instanceof Journal) { + + /* + * Create a local triple store. + * + * Note: This hands over the logic to some custom code + * located on the BigdataSail. + */ + + final Journal jnl = (Journal) indexManager; + + final Properties properties = new Properties(jnl + .getProperties()); + + // override the namespace. + properties.setProperty(BigdataSail.Options.NAMESPACE, + namespace); + + // create the appropriate as configured triple/quad store. + BigdataSail.createLTS(jnl, properties); + + } else { + + /* + * Register triple store for scale-out. + */ + + final JiniFederation<?> fed = (JiniFederation<?>) indexManager; + + final Properties properties = fed.getClient().getProperties(); + + final ScaleOutTripleStore lts = new ScaleOutTripleStore( + indexManager, namespace, ITx.UNISOLATED, properties); + + lts.create(); + + } + + } // if( tripleStore == null ) + + } // if( create ) + txs = (indexManager instanceof Journal ? 
((Journal) indexManager).getTransactionManager() .getTransactionService() : ((IBigdataFederation<?>) indexManager).getTransactionService()); @@ -200,6 +268,10 @@ } + if (log.isInfoEnabled()) + log.info(ConfigParams.QUERY_THREAD_POOL_SIZE + "=" + + queryThreadPoolSize); + } final SparqlEndpointConfig config = new SparqlEndpointConfig( Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/ConfigParams.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/ConfigParams.java 2011-04-15 17:07:00 UTC (rev 4406) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/ConfigParams.java 2011-04-15 21:02:31 UTC (rev 4407) @@ -14,13 +14,23 @@ String PROPERTY_FILE = "property-file"; /** - * The default bigdata namespace of for the triple or quad store - * instance to be exposed (there can be many triple or quad store - * instances within a bigdata instance). + * The default bigdata namespace of for the triple or quad store instance to + * be exposed (default {@link #DEFAULT_NAMESPACE}). Note that there can be + * many triple or quad store instances within a bigdata instance. */ String NAMESPACE = "namespace"; + + String DEFAULT_NAMESPACE = "kb"; /** + * When <code>true</code>, an instance of the specified {@link #NAMESPACE} + * will be created if none exists. + */ + String CREATE = "create"; + + boolean DEFAULT_CREATE = true; + + /** * The size of the thread pool used to service SPARQL queries -OR- ZERO * (0) for an unbounded thread pool (default * {@value #DEFAULT_QUERY_THREAD_POOL_SIZE}). 
Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java 2011-04-15 17:07:00 UTC (rev 4406) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java 2011-04-15 21:02:31 UTC (rev 4407) @@ -76,9 +76,9 @@ final long begin = System.currentTimeMillis(); - final String baseURI = "";// @todo baseURI query parameter? + final String baseURI = req.getRequestURL().toString(); - final String namespace = getNamespace(req.getRequestURI()); + final String namespace = getNamespace(req); final String queryStr = req.getParameter("query"); @@ -227,9 +227,9 @@ final long begin = System.currentTimeMillis(); - final String baseURI = "";// @todo baseURI query parameter? + final String baseURI = req.getRequestURL().toString(); - final String namespace = getNamespace(req.getRequestURI()); + final String namespace = getNamespace(req); final String contentType = req.getContentType(); Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java 2011-04-15 17:07:00 UTC (rev 4406) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java 2011-04-15 21:02:31 UTC (rev 4407) @@ -107,9 +107,9 @@ final long begin = System.currentTimeMillis(); - final String baseURI = "";// @todo baseURI query parameter? 
+ final String baseURI = req.getRequestURL().toString(); - final String namespace = getNamespace(req.getRequestURI()); + final String namespace = getNamespace(req); final String contentType = req.getContentType(); @@ -218,7 +218,7 @@ final long begin = System.currentTimeMillis(); - final String namespace = getNamespace(req.getRequestURI()); + final String namespace = getNamespace(req); final String[] uris = req.getParameterValues("uri"); Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java 2011-04-15 17:07:00 UTC (rev 4406) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java 2011-04-15 21:02:31 UTC (rev 4407) @@ -79,9 +79,9 @@ private void doQuery(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { - final String namespace = getNamespace(req.getRequestURI()); + final String namespace = getNamespace(req); - final long timestamp = getTimestamp(req.getRequestURI(), req); + final long timestamp = getTimestamp(req); final String queryStr = req.getParameter("query"); Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/StatusServlet.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/StatusServlet.java 2011-04-15 17:07:00 UTC (rev 4406) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/StatusServlet.java 2011-04-15 21:02:31 UTC (rev 4407) @@ -104,8 +104,7 @@ // General information on the connected kb. 
current.node("pre", getBigdataRDFContext().getKBInfo( - getNamespace(req.getRequestURI()), - getTimestamp(req.getRequestURI(), req)).toString()); + getNamespace(req), getTimestamp(req)).toString()); } Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/WebAppUnassembled.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/WebAppUnassembled.java 2011-04-15 17:07:00 UTC (rev 4406) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/WebAppUnassembled.java 2011-04-15 21:02:31 UTC (rev 4407) @@ -65,7 +65,7 @@ int port = 80; // default config file. - String file = "bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/web.xml"; + String file = "bigdata-war/src/resources/WEB-INF/web.xml"; /* * Handle all arguments starting with "-". These should appear before Deleted: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/web.xml =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/web.xml 2011-04-15 17:07:00 UTC (rev 4406) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/web.xml 2011-04-15 21:02:31 UTC (rev 4407) @@ -1,63 +0,0 @@ -<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd"> -<web-app> - <display-name>Bigdata</display-name> - <description>Bigdata</description> - <context-param> - <param-name>property-file</param-name> - <param-value>bigdata-perf/lubm/WORMStore.properties</param-value> - <description>The property file (for a standalone database instance) or the - jini configuration file (for a federation). 
The file MUST end with either - ".properties" or ".config".</description> - </context-param> - <context-param> - <param-name>namespace</param-name> - <param-value>LUBM_U1</param-value> - <description>The default bigdata namespace of for the triple or quad store - instance to be exposed (there can be many triple or quad store instances - within a bigdata instance).</description> - </context-param> - <context-param> - <param-name>query-thread-pool-size</param-name> - <param-value>8</param-value> - <description>The size of the thread pool used to service SPARQL queries -OR- - ZERO (0) for an unbounded thread pool.</description> - </context-param> - <context-param> - <param-name>force-overflow</param-name> - <param-value>false</param-value> - <description>Force a compacting merge of all shards on all data - services in a bigdata federation (this option should only be - used for benchmarking purposes).</description> - </context-param> - <listener> - <listener-class>com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener</listener-class> - </listener> - <servlet> - <servlet-name>REST API</servlet-name> - <display-name>REST API</display-name> - <servlet-class>com.bigdata.rdf.sail.webapp.RESTServlet</servlet-class> - <load-on-startup>0</load-on-startup> - </servlet> - <servlet> - <servlet-name>Status</servlet-name> - <display-name>Status</display-name> - <servlet-class>com.bigdata.rdf.sail.webapp.StatusServlet</servlet-class> - </servlet> - <servlet> - <servlet-name>Counters</servlet-name> - <display-name>Performance counters</display-name> - <servlet-class>com.bigdata.rdf.sail.webapp.CountersServlet</servlet-class> - </servlet> - <servlet-mapping> - <servlet-name>REST API</servlet-name> - <url-pattern>/</url-pattern> - </servlet-mapping> - <servlet-mapping> - <servlet-name>Status</servlet-name> - <url-pattern>/status</url-pattern> - </servlet-mapping> - <servlet-mapping> - <servlet-name>Counters</servlet-name> - <url-pattern>/counters</url-pattern> - 
</servlet-mapping> -</web-app> Added: branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/RWStore.properties =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/RWStore.properties (rev 0) +++ branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/RWStore.properties 2011-04-15 21:02:31 UTC (rev 4407) @@ -0,0 +1,30 @@ +# +# Note: These options are applied when the journal and the triple store are +# first created. + +## +## Journal options. +## + +# The backing file. +com.bigdata.journal.AbstractJournal.file=bigdata.jnl + +# The persistence engine. Use 'Disk' for the WORM or 'DiskRW' for the RWStore. +com.bigdata.journal.AbstractJournal.bufferMode=DiskRW + +com.bigdata.btree.writeRetentionQueue.capacity=4000 +com.bigdata.btree.BTree.branchingFactor=128 + +# 200M initial extent. +com.bigdata.journal.AbstractJournal.initialExtent=209715200 +com.bigdata.journal.AbstractJournal.maximumExtent=209715200 + +## +## Setup for QUADS mode without the full text index. 
+## +com.bigdata.rdf.sail.truthMaintenance=false +com.bigdata.rdf.store.AbstractTripleStore.quads=false +com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers=false +com.bigdata.rdf.store.AbstractTripleStore.textIndex=false +com.bigdata.rdf.store.AbstractTripleStore.axiomsClass=com.bigdata.rdf.axioms.NoAxioms +#com.bigdata.rdf.store.AbstractTripleStore.inlineDateTimes=true Property changes on: branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/RWStore.properties ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Added: branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/WEB-INF/web.xml =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/WEB-INF/web.xml (rev 0) +++ branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/WEB-INF/web.xml 2011-04-15 21:02:31 UTC (rev 4407) @@ -0,0 +1,70 @@ +<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd"> +<web-app> + <display-name>Bigdata</display-name> + <description>Bigdata</description> + <context-param> + <param-name>property-file</param-name> + <param-value>../webapps/bigdata/RWStore.properties</param-value> + <description>The property file (for a standalone database instance) or the + jini configuration file (for a federation). The file MUST end with either + ".properties" or ".config". This path is relative to the directory from + which you start the servlet container so you may have to edit it for your + installation, e.g., by specifying an absolution path. 
Also, it is a good + idea to review the RWStore.properties file as well and specify the location + of the database file on which it will persist your data.</description> + </context-param> + <context-param> + <param-name>namespace</param-name> + <param-value>kb</param-value> + <description>The default bigdata namespace of for the triple or quad store + instance to be exposed.</description> + </context-param> + <context-param> + <param-name>create</param-name> + <param-value>true</param-value> + <description>When true a new triple or quads store instance will be created + if none is found at that namespace.</description> + </context-param> + <context-param> + <param-name>query-thread-pool-size</param-name> + <param-value>16</param-value> + <description>The size of the thread pool used to service SPARQL queries -OR- + ZERO (0) for an unbounded thread pool.</description> + </context-param> + <listener> + <listener-class>com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener</listener-class> + </listener> + <servlet> + <servlet-name>REST API</servlet-name> + <display-name>REST API</display-name> + <description>The REST API, including a SPARQL end point, as described at + https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=NanoSparqlServer + </description> + <servlet-class>com.bigdata.rdf.sail.webapp.RESTServlet</servlet-class> + <load-on-startup>0</load-on-startup> + </servlet> + <servlet> + <servlet-name>Status</servlet-name> + <display-name>Status</display-name> + <description>A status page.</description> + <servlet-class>com.bigdata.rdf.sail.webapp.StatusServlet</servlet-class> + </servlet> + <servlet> + <servlet-name>Counters</servlet-name> + <display-name>Performance counters</display-name> + <description>Performance counters.</description> + <servlet-class>com.bigdata.rdf.sail.webapp.CountersServlet</servlet-class> + </servlet> + <servlet-mapping> + <servlet-name>REST API</servlet-name> + <url-pattern>/</url-pattern> + </servlet-mapping> + 
<servlet-mapping> + <servlet-name>Status</servlet-name> + <url-pattern>/status</url-pattern> + </servlet-mapping> + <servlet-mapping> + <servlet-name>Counters</servlet-name> + <url-pattern>/counters</url-pattern> + </servlet-mapping> +</web-app> Property changes on: branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/WEB-INF/web.xml ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Added: branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/log4j.properties =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/log4j.properties (rev 0) +++ branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/log4j.properties 2011-04-15 21:02:31 UTC (rev 4407) @@ -0,0 +1,80 @@ +# Default log4j configuration. See the individual classes for the +# specific loggers, but generally they are named for the class in +# which they are defined. + +# Default log4j configuration for testing purposes. +# +# You probably want to set the default log level to ERROR. +# +log4j.rootCategory=WARN, dest1 +#log4j.rootCategory=WARN, dest2 + +# Loggers. +# Note: logging here at INFO or DEBUG will significantly impact throughput! +log4j.logger.com.bigdata=WARN +log4j.logger.com.bigdata.btree=WARN + +# Normal data loader (single threaded). +#log4j.logger.com.bigdata.rdf.store.DataLoader=INFO + +# dest1 +log4j.appender.dest1=org.apache.log4j.ConsoleAppender +log4j.appender.dest1.layout=org.apache.log4j.PatternLayout +log4j.appender.dest1.layout.ConversionPattern=%-5p: %F:%L: %m%n +#log4j.appender.dest1.layout.ConversionPattern=%-5p: %r %l: %m%n +#log4j.appender.dest1.layout.ConversionPattern=%-5p: %m%n +#log4j.appender.dest1.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n +#log4j.appender.dest1.layout.ConversionPattern=%-4r(%d) [%t] %-5p %c(%l:%M) %x - %m%n + +# dest2 includes the thread name and elapsed milliseconds. +# Note: %r is elapsed milliseconds. 
+# Note: %t is the thread name. +# See http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/PatternLayout.html +log4j.appender.dest2=org.apache.log4j.ConsoleAppender +log4j.appender.dest2.layout=org.apache.log4j.PatternLayout +log4j.appender.dest2.layout.ConversionPattern=%-5p: %r %X{hostname} %X{serviceUUID} %X{taskname} %X{timestamp} %X{resources} %t %l: %m%n + +## +# Rule execution log. This is a formatted log file (comma delimited). +log4j.logger.com.bigdata.relation.rule.eval.RuleLog=INFO,ruleLog +log4j.additivity.com.bigdata.relation.rule.eval.RuleLog=false +log4j.appender.ruleLog=org.apache.log4j.FileAppender +log4j.appender.ruleLog.Threshold=ALL +log4j.appender.ruleLog.File=rules.log +log4j.appender.ruleLog.Append=true +# I find that it is nicer to have this unbuffered since you can see what +# is going on and to make sure that I have complete rule evaluation logs +# on shutdown. +log4j.appender.ruleLog.BufferedIO=false +log4j.appender.ruleLog.layout=org.apache.log4j.PatternLayout +log4j.appender.ruleLog.layout.ConversionPattern=%m + +## +# Summary query evaluation log (tab delimited file). Uncomment the next line to enable. +#log4j.logger.com.bigdata.bop.engine.QueryLog=INFO,queryLog +log4j.additivity.com.bigdata.bop.engine.QueryLog=false +log4j.appender.queryLog=org.apache.log4j.FileAppender +log4j.appender.queryLog.Threshold=ALL +log4j.appender.queryLog.File=queryLog.csv +log4j.appender.queryLog.Append=true +# I find that it is nicer to have this unbuffered since you can see what +# is going on and to make sure that I have complete rule evaluation logs +# on shutdown. +log4j.appender.queryLog.BufferedIO=false +log4j.appender.queryLog.layout=org.apache.log4j.PatternLayout +log4j.appender.queryLog.layout.ConversionPattern=%m + +## +# BOp run state trace (tab delimited file). Uncomment the next line to enable. 
+#log4j.logger.com.bigdata.bop.engine.RunState$TableLog=INFO,queryRunStateLog +log4j.additivity.com.bigdata.bop.engine.RunState$TableLog=false +log4j.appender.queryRunStateLog=org.apache.log4j.FileAppender +log4j.appender.queryRunStateLog.Threshold=ALL +log4j.appender.queryRunStateLog.File=queryRunState.log +log4j.appender.queryRunStateLog.Append=true +# I find that it is nicer to have this unbuffered since you can see what +# is going on and to make sure that I have complete rule evaluation logs +# on shutdown. +log4j.appender.queryRunStateLog.BufferedIO=false +log4j.appender.queryRunStateLog.layout=org.apache.log4j.PatternLayout +log4j.appender.queryRunStateLog.layout.ConversionPattern=%m Property changes on: branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/log4j.properties ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
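The dest1 appender in the log4j.properties above is left on the pattern `%-5p: %F:%L: %m%n` (priority padded to five characters, then file:line of the call site, then the message). As a plain-Java sketch of what that pattern renders — the class name, line number, and message below are made up for illustration; log4j's PatternLayout does the real work:

```java
// Plain-Java illustration (not log4j code) of what the dest1
// ConversionPattern "%-5p: %F:%L: %m%n" emits for a log event.
public class PatternDemo {

    static String render(String level, String file, int line, String msg) {
        // %-5p -> level left-justified to 5 chars, %F:%L -> call site, %m -> message
        return String.format("%-5s: %s:%d: %s", level, file, line, msg);
    }

    public static void main(String[] args) {
        System.out.println(render("WARN", "RWStore.java", 210, "allocation slop"));
        // prints: WARN : RWStore.java:210: allocation slop
    }
}
```

The commented-out dest2 pattern adds `%r` (elapsed milliseconds) and `%t` (thread name) plus the `%X{...}` MDC values, which is why it is noticeably more expensive than dest1.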
From: <tho...@us...> - 2011-04-15 21:12:47
|
Revision: 4408 http://bigdata.svn.sourceforge.net/bigdata/?rev=4408&view=rev Author: thompsonbry Date: 2011-04-15 21:12:41 +0000 (Fri, 15 Apr 2011) Log Message: ----------- Adding the " Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/build.xml Added Paths: ----------- branches/QUADS_QUERY_BRANCH/bigdata-war/RWStore.properties branches/QUADS_QUERY_BRANCH/bigdata-war/WEB-INF/ branches/QUADS_QUERY_BRANCH/bigdata-war/WEB-INF/web.xml branches/QUADS_QUERY_BRANCH/bigdata-war/classes/ branches/QUADS_QUERY_BRANCH/bigdata-war/classes/log4j.properties Added: branches/QUADS_QUERY_BRANCH/bigdata-war/RWStore.properties =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-war/RWStore.properties (rev 0) +++ branches/QUADS_QUERY_BRANCH/bigdata-war/RWStore.properties 2011-04-15 21:12:41 UTC (rev 4408) @@ -0,0 +1,30 @@ +# +# Note: These options are applied when the journal and the triple store are +# first created. + +## +## Journal options. +## + +# The backing file. +com.bigdata.journal.AbstractJournal.file=bigdata.jnl + +# The persistence engine. Use 'Disk' for the WORM or 'DiskRW' for the RWStore. +com.bigdata.journal.AbstractJournal.bufferMode=DiskRW + +com.bigdata.btree.writeRetentionQueue.capacity=4000 +com.bigdata.btree.BTree.branchingFactor=128 + +# 200M initial extent. +com.bigdata.journal.AbstractJournal.initialExtent=209715200 +com.bigdata.journal.AbstractJournal.maximumExtent=209715200 + +## +## Setup for QUADS mode without the full text index. 
+## +com.bigdata.rdf.sail.truthMaintenance=false +com.bigdata.rdf.store.AbstractTripleStore.quads=false +com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers=false +com.bigdata.rdf.store.AbstractTripleStore.textIndex=false +com.bigdata.rdf.store.AbstractTripleStore.axiomsClass=com.bigdata.rdf.axioms.NoAxioms +#com.bigdata.rdf.store.AbstractTripleStore.inlineDateTimes=true Property changes on: branches/QUADS_QUERY_BRANCH/bigdata-war/RWStore.properties ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Added: branches/QUADS_QUERY_BRANCH/bigdata-war/WEB-INF/web.xml =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-war/WEB-INF/web.xml (rev 0) +++ branches/QUADS_QUERY_BRANCH/bigdata-war/WEB-INF/web.xml 2011-04-15 21:12:41 UTC (rev 4408) @@ -0,0 +1,68 @@ +<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd"> +<web-app> + <display-name>Bigdata</display-name> + <description>Bigdata</description> + <context-param> + <param-name>property-file</param-name> + <param-value>bigdata/RWStore.properties</param-value> + <description>The property file (for a standalone database instance) or the + jini configuration file (for a federation). The file MUST end with either + ".properties" or ".config". 
When deploying a web application, the bundled + property files are located in the root of the "bigdata" WAR and are located + as "bigdata/RWStore.properties", etc.</description> + </context-param> + <context-param> + <param-name>namespace</param-name> + <param-value>kb</param-value> + <description>The default bigdata namespace of for the triple or quad store + instance to be exposed.</description> + </context-param> + <context-param> + <param-name>create</param-name> + <param-value>true</param-value> + <description>When true a new triple or quads store instance will be created + if none is found at that namespace.</description> + </context-param> + <context-param> + <param-name>query-thread-pool-size</param-name> + <param-value>16</param-value> + <description>The size of the thread pool used to service SPARQL queries -OR- + ZERO (0) for an unbounded thread pool.</description> + </context-param> + <listener> + <listener-class>com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener</listener-class> + </listener> + <servlet> + <servlet-name>REST API</servlet-name> + <display-name>REST API</display-name> + <description>The REST API, including a SPARQL end point, as described at + https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=NanoSparqlServer + </description> + <servlet-class>com.bigdata.rdf.sail.webapp.RESTServlet</servlet-class> + <load-on-startup>0</load-on-startup> + </servlet> + <servlet> + <servlet-name>Status</servlet-name> + <display-name>Status</display-name> + <description>A status page.</description> + <servlet-class>com.bigdata.rdf.sail.webapp.StatusServlet</servlet-class> + </servlet> + <servlet> + <servlet-name>Counters</servlet-name> + <display-name>Performance counters</display-name> + <description>Performance counters.</description> + <servlet-class>com.bigdata.rdf.sail.webapp.CountersServlet</servlet-class> + </servlet> + <servlet-mapping> + <servlet-name>REST API</servlet-name> + <url-pattern>/bigdata</url-pattern> + 
</servlet-mapping> + <servlet-mapping> + <servlet-name>Status</servlet-name> + <url-pattern>/bigdata/status</url-pattern> + </servlet-mapping> + <servlet-mapping> + <servlet-name>Counters</servlet-name> + <url-pattern>/bigdata/counters</url-pattern> + </servlet-mapping> +</web-app> Property changes on: branches/QUADS_QUERY_BRANCH/bigdata-war/WEB-INF/web.xml ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Added: branches/QUADS_QUERY_BRANCH/bigdata-war/classes/log4j.properties =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-war/classes/log4j.properties (rev 0) +++ branches/QUADS_QUERY_BRANCH/bigdata-war/classes/log4j.properties 2011-04-15 21:12:41 UTC (rev 4408) @@ -0,0 +1,80 @@ +# Default log4j configuration. See the individual classes for the +# specific loggers, but generally they are named for the class in +# which they are defined. + +# Default log4j configuration for testing purposes. +# +# You probably want to set the default log level to ERROR. +# +log4j.rootCategory=WARN, dest1 +#log4j.rootCategory=WARN, dest2 + +# Loggers. +# Note: logging here at INFO or DEBUG will significantly impact throughput! +log4j.logger.com.bigdata=WARN +log4j.logger.com.bigdata.btree=WARN + +# Normal data loader (single threaded). +#log4j.logger.com.bigdata.rdf.store.DataLoader=INFO + +# dest1 +log4j.appender.dest1=org.apache.log4j.ConsoleAppender +log4j.appender.dest1.layout=org.apache.log4j.PatternLayout +log4j.appender.dest1.layout.ConversionPattern=%-5p: %F:%L: %m%n +#log4j.appender.dest1.layout.ConversionPattern=%-5p: %r %l: %m%n +#log4j.appender.dest1.layout.ConversionPattern=%-5p: %m%n +#log4j.appender.dest1.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n +#log4j.appender.dest1.layout.ConversionPattern=%-4r(%d) [%t] %-5p %c(%l:%M) %x - %m%n + +# dest2 includes the thread name and elapsed milliseconds. +# Note: %r is elapsed milliseconds. 
+# Note: %t is the thread name. +# See http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/PatternLayout.html +log4j.appender.dest2=org.apache.log4j.ConsoleAppender +log4j.appender.dest2.layout=org.apache.log4j.PatternLayout +log4j.appender.dest2.layout.ConversionPattern=%-5p: %r %X{hostname} %X{serviceUUID} %X{taskname} %X{timestamp} %X{resources} %t %l: %m%n + +## +# Rule execution log. This is a formatted log file (comma delimited). +log4j.logger.com.bigdata.relation.rule.eval.RuleLog=INFO,ruleLog +log4j.additivity.com.bigdata.relation.rule.eval.RuleLog=false +log4j.appender.ruleLog=org.apache.log4j.FileAppender +log4j.appender.ruleLog.Threshold=ALL +log4j.appender.ruleLog.File=rules.log +log4j.appender.ruleLog.Append=true +# I find that it is nicer to have this unbuffered since you can see what +# is going on and to make sure that I have complete rule evaluation logs +# on shutdown. +log4j.appender.ruleLog.BufferedIO=false +log4j.appender.ruleLog.layout=org.apache.log4j.PatternLayout +log4j.appender.ruleLog.layout.ConversionPattern=%m + +## +# Summary query evaluation log (tab delimited file). Uncomment the next line to enable. +#log4j.logger.com.bigdata.bop.engine.QueryLog=INFO,queryLog +log4j.additivity.com.bigdata.bop.engine.QueryLog=false +log4j.appender.queryLog=org.apache.log4j.FileAppender +log4j.appender.queryLog.Threshold=ALL +log4j.appender.queryLog.File=queryLog.csv +log4j.appender.queryLog.Append=true +# I find that it is nicer to have this unbuffered since you can see what +# is going on and to make sure that I have complete rule evaluation logs +# on shutdown. +log4j.appender.queryLog.BufferedIO=false +log4j.appender.queryLog.layout=org.apache.log4j.PatternLayout +log4j.appender.queryLog.layout.ConversionPattern=%m + +## +# BOp run state trace (tab delimited file). Uncomment the next line to enable. 
+#log4j.logger.com.bigdata.bop.engine.RunState$TableLog=INFO,queryRunStateLog +log4j.additivity.com.bigdata.bop.engine.RunState$TableLog=false +log4j.appender.queryRunStateLog=org.apache.log4j.FileAppender +log4j.appender.queryRunStateLog.Threshold=ALL +log4j.appender.queryRunStateLog.File=queryRunState.log +log4j.appender.queryRunStateLog.Append=true +# I find that it is nicer to have this unbuffered since you can see what +# is going on and to make sure that I have complete rule evaluation logs +# on shutdown. +log4j.appender.queryRunStateLog.BufferedIO=false +log4j.appender.queryRunStateLog.layout=org.apache.log4j.PatternLayout +log4j.appender.queryRunStateLog.layout.ConversionPattern=%m Property changes on: branches/QUADS_QUERY_BRANCH/bigdata-war/classes/log4j.properties ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Modified: branches/QUADS_QUERY_BRANCH/build.xml =================================================================== --- branches/QUADS_QUERY_BRANCH/build.xml 2011-04-15 21:02:31 UTC (rev 4407) +++ branches/QUADS_QUERY_BRANCH/build.xml 2011-04-15 21:12:41 UTC (rev 4408) @@ -116,7 +116,7 @@ <!-- Builds the bigdata JAR and bundles it together with all of its dependencies in the ${build.dir}/lib directory. --> <target name="bundleJar" depends="clean, bundle, jar" description="Builds the bigdata JAR and bundles it together with all of its dependencies in the ${build.dir}/lib directory."> - <copy file="${build.dir}/${version}.jar" todir="${build.dir}/lib" /> + <copy file="${build.dir}/${version}.jar" todir="${build.dir}/lib"/> <!--<property name="myclasspath" refid="runtime.classpath" /> <echo message="${myclasspath}"/>--> </target> @@ -132,9 +132,6 @@ </jar> </target> - - - <!-- This generates an osgi bundle jar, and does not bundled the dependencies. See 'bundleJar'. 
--> <target name="osgi" depends="compile, bundle" description="Generates the osgi bundle jar (see also bundleJar)."> @@ -158,8 +155,8 @@ </jar> <bnd output="${build.dir}/bundles/com.bigata-${osgi.version}.jar" classpath="${build.dir}/classes" eclipse="false" failok="false" exceptions="true" files="${basedir}/osgi/bigdata.bnd" /> - <bndwrap jars="${build.dir}/lib/unimi/colt-1.2.0.jar" output="${build.dir}/bundles/colt-1.2.0.jar" definitions="${basedir}/osgi/" /> - <bndwrap jars="${build.dir}/lib/unimi/fastutil-5.1.5.jar" output="${build.dir}/bundles/fastutil-5.1.5.jar" definitions="${basedir}/osgi/" /> + <bndwrap jars="${build.dir}/lib/colt-1.2.0.jar" output="${build.dir}/bundles/colt-1.2.0.jar" definitions="${basedir}/osgi/" /> + <bndwrap jars="${build.dir}/lib/fastutil-5.1.5.jar" output="${build.dir}/bundles/fastutil-5.1.5.jar" definitions="${basedir}/osgi/" /> <bndwrap jars="${build.dir}/lib/cweb-commons-1.1-b2-dev.jar" output="${build.dir}/bundles/cweb-commons-1.1.2.jar" definitions="${basedir}/osgi/" /> <bndwrap jars="${build.dir}/lib/cweb-extser-0.1-b2-dev.jar" output="${build.dir}/bundles/cweb-extser-1.1.2.jar" definitions="${basedir}/osgi/" /> <bndwrap jars="${build.dir}/lib/dsi-utils-1.0.6-020610.jar" output="${build.dir}/bundles/dsi-utils-1.0.6-020610.jar" definitions="${basedir}/osgi/" /> @@ -202,7 +199,7 @@ </target> <target name="bundle" description="Bundles all dependencies for easier deployments and releases (does not bundle the bigdata jar)."> -<copy toDir="${build.dir}/lib"> +<copy toDir="${build.dir}/lib" flatten="true"> <fileset dir="${bigdata.dir}/bigdata/lib"> <include name="**/*.jar" /> <include name="**/*.so" /> @@ -226,6 +223,53 @@ </copy> </target> + <!--depends="bundleJar"--> + <!-- + TODO 13M of the resulting WAR is fastutils. We need to slim down that JAR to + just the classes that we actually use before deploying it. 
+ --> + <target name="war" + description="Generates a WAR artifact."> + <delete file="${build.dir}/bigdata.war"/> + <echo message="Building WAR"/> + <war destfile="${build.dir}/bigdata.war" + webxml="bigdata-war/src/resources/WEB-INF/web.xml" + > + <fileset dir="bigdata-war/src/html"/> + <fileset dir="bigdata-war/src/jsp"/> + <fileset dir="bigdata-war/src/images"/> + <file file="bigdata-war/src/resources/RWStore.properties"/> + <lib dir="${build.dir}/lib"> + <exclude name="iris*.jar"/> + <exclude name="jgrapht*.jar"/> + <exclude name="tuprolog*.jar"/> + <exclude name="sesame*testsuite*.jar"/> + <exclude name="bnd*.jar"/> + <exclude name="jetty*.jar"/> + <exclude name="servlet-api*.jar"/> + <exclude name="zookeeper*.jar"/> + <!-- jini --> + <exclude name="jini*.jar"/> + <exclude name="jsk*.jar"/> + <exclude name="mahalo*.jar"/> + <exclude name="mercury*.jar"/> + <exclude name="norm*.jar"/> + <exclude name="outrigger*.jar"/> + <exclude name="phoenix*.jar"/> + <exclude name="reggie*.jar"/> + <exclude name="sdm*.jar"/> + <exclude name="start*.jar"/> + <exclude name="sun-util*.jar"/> + <exclude name="tools*.jar"/> + <exclude name="browser*.jar"/> + <exclude name="classserver*.jar"/> + <exclude name="fiddler*.jar"/> + <exclude name="group*.jar"/> + </lib> + <classes file="bigdata-war/src/resources/log4j.properties"/> + </war> + </target> + <target name="release-prepare" depends="jar, bundle, javadoc" description="create a release."> <!-- The source tree. --> <copy toDir="${build.dir}/bigdata/src"> This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2011-04-17 20:11:51
|
Revision: 4410 http://bigdata.svn.sourceforge.net/bigdata/?rev=4410&view=rev
Author: thompsonbry
Date: 2011-04-17 20:11:44 +0000 (Sun, 17 Apr 2011)

Log Message:
-----------
Modified build.xml to exclude some more jars which do not need to be bundled with the WAR.

Modified build.xml to generate a WAR with a reduced fastutil jar. This reduces the size of the WAR by nearly half.

Added autojar as a build tool. This is being used to trim the fastutil jar down to just those classes we actually use.

Bug fix to the servlet context listener to close the journal if it opened it.

Added a comment to RWStore.properties for the webapp concerning the location of the journal file.

Added an alternative location for RWStore.properties for use with WebAppUnassembled for testing, and comments.

Added a public method to permit the service for SynchronizedHardReferenceQueueWithTimeout to be shut down, and added code to invoke that method from the servlet context listener so we will stop generating complaints about that thread in tomcat.

Added logic to NonBlockingLockManagerWithNewDesign to wait until the backing service is terminated.
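The two thread-shutdown changes in this revision follow a standard java.util.concurrent pattern: a static cleaner service on a daemon thread (so it never keeps the JVM alive on its own), a public hook so a servlet container can stop it at context destruction, and a blocking wait for actual termination. A simplified stand-in — this is not the bigdata class, just the shape of the change:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Minimal sketch (simplified stand-in, not the bigdata class) of the
// pattern described in the log message: a static cleaner service on a
// daemon thread, plus a hook so the servlet context listener can stop
// it instead of tomcat complaining about a leaked thread.
public class CleanerDemo {

    private static final ScheduledExecutorService cleanerService =
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "cleaner");
            t.setDaemon(true); // will not prevent the JVM from halting
            return t;
        });

    /** Invoked from contextDestroyed() when the webapp is stopped. */
    public static void stopStaleReferenceCleaner() {
        cleanerService.shutdownNow();
    }

    /**
     * Mirrors the NonBlockingLockManagerWithNewDesign change: block
     * until the backing service has actually terminated.
     */
    static boolean awaitStopped(long seconds) {
        try {
            return cleanerService.awaitTermination(seconds, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        stopStaleReferenceCleaner();
        System.out.println(awaitStopped(5));
        // prints: true
    }
}
```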
Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/cache/SynchronizedHardReferenceQueueWithTimeout.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/concurrent/NonBlockingLockManagerWithNewDesign.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/RWStore.properties branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/WEB-INF/web.xml branches/QUADS_QUERY_BRANCH/build.xml Added Paths: ----------- branches/QUADS_QUERY_BRANCH/src/build/ branches/QUADS_QUERY_BRANCH/src/build/README.txt branches/QUADS_QUERY_BRANCH/src/build/autojar/ branches/QUADS_QUERY_BRANCH/src/build/autojar/README.txt branches/QUADS_QUERY_BRANCH/src/build/autojar/autojar-license.txt branches/QUADS_QUERY_BRANCH/src/build/autojar/autojar.jar Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/cache/SynchronizedHardReferenceQueueWithTimeout.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/cache/SynchronizedHardReferenceQueueWithTimeout.java 2011-04-16 11:15:48 UTC (rev 4409) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/cache/SynchronizedHardReferenceQueueWithTimeout.java 2011-04-17 20:11:44 UTC (rev 4410) @@ -58,10 +58,8 @@ public class SynchronizedHardReferenceQueueWithTimeout<T> implements IHardReferenceQueue<T> { - protected static final Logger log = Logger.getLogger(SynchronizedHardReferenceQueueWithTimeout.class); + private static final Logger log = Logger.getLogger(SynchronizedHardReferenceQueueWithTimeout.class); - protected static final boolean DEBUG = log.isDebugEnabled(); - /** * Note: Synchronization for the inner {@link #queue} is realized using the * <strong>outer</strong> reference! 
@@ -202,7 +200,7 @@ } - if (DEBUG && ncleared > 3) + if (log.isDebugEnabled() && ncleared > 3) log.debug("#ncleared=" + ncleared + ", size=" + size() + ", timeout=" + TimeUnit.NANOSECONDS.toMillis(timeout) + ", maxAge=" + TimeUnit.NANOSECONDS.toMillis(maxAge) @@ -338,6 +336,21 @@ /* * Cleaner service. */ + + /** + * This method may be invoked by life cycle operations which need to tear + * down the bigdata environment. Normally you do not need to do this as the + * cleaner service uses a daemon thread and will not prevent the JVM from + * halting. However, a servlet container running bigdata can complain that + * some threads were not terminated if the webapp running bigdata is + * stopped. You can invoke this method to terminate the stale reference + * cleaner thread. + */ + public static final void stopStaleReferenceCleaner() { + + cleanerService.shutdownNow(); + + } private static final ScheduledExecutorService cleanerService; static { Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/concurrent/NonBlockingLockManagerWithNewDesign.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/concurrent/NonBlockingLockManagerWithNewDesign.java 2011-04-16 11:15:48 UTC (rev 4409) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/concurrent/NonBlockingLockManagerWithNewDesign.java 2011-04-17 20:11:44 UTC (rev 4410) @@ -848,6 +848,17 @@ } + try { + // wait for the service to terminate. 
+ if (log.isInfoEnabled()) + log.info("Waiting for service shutdown."); + service.awaitTermination(Long.MAX_VALUE, TimeUnit.MILLISECONDS); + if (log.isInfoEnabled()) + log.info("Service is shutdown."); + } catch (InterruptedException e) { + throw new RuntimeException(e); + } + } public void shutdownNow() { Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java 2011-04-16 11:15:48 UTC (rev 4409) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java 2011-04-17 20:11:44 UTC (rev 4410) @@ -42,6 +42,7 @@ import org.apache.log4j.Logger; import com.bigdata.Banner; +import com.bigdata.cache.SynchronizedHardReferenceQueueWithTimeout; import com.bigdata.journal.IIndexManager; import com.bigdata.journal.ITransactionService; import com.bigdata.journal.ITx; @@ -151,7 +152,7 @@ indexManager = openIndexManager(propertyFile); // we are responsible for the life cycle. - closeIndexManager = false; + closeIndexManager = true; } @@ -365,6 +366,18 @@ } + /* + * Terminate various threads which should no longer be executing once we + * have destroyed the servlet context. If you do not do this then + * servlet containers such as tomcat will complain that we did not stop + * some threads. 
+ */ + { + + SynchronizedHardReferenceQueueWithTimeout.stopStaleReferenceCleaner(); + + } + } /** Modified: branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/RWStore.properties =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/RWStore.properties 2011-04-16 11:15:48 UTC (rev 4409) +++ branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/RWStore.properties 2011-04-17 20:11:44 UTC (rev 4410) @@ -6,7 +6,9 @@ ## Journal options. ## -# The backing file. +# The backing file. This contains all your data. You want to put this someplace +# safe. The default locator will wind up in the directory from which you start +# your servlet container. com.bigdata.journal.AbstractJournal.file=bigdata.jnl # The persistence engine. Use 'Disk' for the WORM or 'DiskRW' for the RWStore. Modified: branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/WEB-INF/web.xml =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/WEB-INF/web.xml 2011-04-16 11:15:48 UTC (rev 4409) +++ branches/QUADS_QUERY_BRANCH/bigdata-war/src/resources/WEB-INF/web.xml 2011-04-17 20:11:44 UTC (rev 4410) @@ -3,7 +3,12 @@ <display-name>Bigdata</display-name> <description>Bigdata</description> <context-param> + <!-- Note: This path is relative to the directory in which you start --> + <!-- the servlet container. --> <param-name>property-file</param-name> + <!-- Note: This path may be used with WebAppUnassembled for testing. + <param-value>bigdata-war/src/resources/RWStore.properties</param-value> + --> <param-value>../webapps/bigdata/RWStore.properties</param-value> <description>The property file (for a standalone database instance) or the jini configuration file (for a federation). 
The file MUST end with either Modified: branches/QUADS_QUERY_BRANCH/build.xml =================================================================== --- branches/QUADS_QUERY_BRANCH/build.xml 2011-04-16 11:15:48 UTC (rev 4409) +++ branches/QUADS_QUERY_BRANCH/build.xml 2011-04-17 20:11:44 UTC (rev 4410) @@ -223,12 +223,35 @@ </copy> </target> + <!-- + This target produces a new jar which includes everything from the bigdata + jar, the dsi-util jar, the lgpl-utils jar, and exactly those class files + from colt and fastutil which are required by the proceeding jars. The + main advantage of the resulting jar is that the vast majority of fastutil + is not necessary, and it is a 13M jar. + --> + <target name="autojar" + description="Produce an expanded version of the bigdata jar which + includes the data from the dsi-util and lgpl-utils jars and only + those classes from fastutil and colt which are required to support + bigdata and dsiutil at runtime."> + <java jar="src/build/autojar/autojar.jar" fork="true" failonerror="true"> + <arg line="-o ${build.dir}/bigdataPlus.jar + -c ${bigdata.dir}/bigdata/lib/unimi/fastutil*.jar + -c ${bigdata.dir}/bigdata/lib/unimi/colt*.jar + ${build.dir}/lib/bigdata*.jar + ${bigdata.dir}/bigdata/lib/dsi-util*.jar + ${bigdata.dir}/bigdata/lib/lgpl-utils*.jar + " /> + </java> + </target> + <!--depends="bundleJar"--> <!-- TODO 13M of the resulting WAR is fastutils. We need to slim down that JAR to just the classes that we actually use before deploying it. --> - <target name="war" + <target name="war" depends="autojar" description="Generates a WAR artifact."> <delete file="${build.dir}/bigdata.war"/> <echo message="Building WAR"/> @@ -239,16 +262,31 @@ <fileset dir="bigdata-war/src/jsp"/> <fileset dir="bigdata-war/src/images"/> <file file="bigdata-war/src/resources/RWStore.properties"/> + <!-- bigdata jar plus some dependencies as filtered by autojar. 
--> + <lib file="${build.dir}/bigdataPlus.jar"/> <lib dir="${build.dir}/lib"> + <!-- jars bundled into "bigdata-plus" by autojar. --> + <exclude name="fastutil*.jar"/> + <exclude name="colt*.jar"/> + <exclude name="dsi-util*.jar"/> + <exclude name="lgpl-utils*.jar"/> + <exclude name="bigdata*.jar"/> + <!-- jars which are not currently used. --> <exclude name="iris*.jar"/> <exclude name="jgrapht*.jar"/> <exclude name="tuprolog*.jar"/> + <!-- test suite stuff is not needed. --> + <exclude name="junit*.jar"/> + <exclude name="cweb-junit*.jar"/> <exclude name="sesame*testsuite*.jar"/> + <!-- osgi stuff is not needed. --> <exclude name="bnd*.jar"/> + <!-- jetty / servlet API not required for the WAR. --> <exclude name="jetty*.jar"/> <exclude name="servlet-api*.jar"/> + <!-- zookeeper only used in scale-out. --> <exclude name="zookeeper*.jar"/> - <!-- jini --> + <!-- jini only used in scale-out. --> <exclude name="jini*.jar"/> <exclude name="jsk*.jar"/> <exclude name="mahalo*.jar"/> Added: branches/QUADS_QUERY_BRANCH/src/build/README.txt =================================================================== --- branches/QUADS_QUERY_BRANCH/src/build/README.txt (rev 0) +++ branches/QUADS_QUERY_BRANCH/src/build/README.txt 2011-04-17 20:11:44 UTC (rev 4410) @@ -0,0 +1,3 @@ +This directory contains resources which are solely used to support the build +of artifacts. They are NOT bundled with bigdata distributions and are NOT +required to run bigdata. 
\ No newline at end of file Property changes on: branches/QUADS_QUERY_BRANCH/src/build/README.txt ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Added: branches/QUADS_QUERY_BRANCH/src/build/autojar/README.txt =================================================================== --- branches/QUADS_QUERY_BRANCH/src/build/autojar/README.txt (rev 0) +++ branches/QUADS_QUERY_BRANCH/src/build/autojar/README.txt 2011-04-17 20:11:44 UTC (rev 4410) @@ -0,0 +1,7 @@ +autojar provides a means of filtering a jar file to include only those +resources actually used by the application. It is used, as recommended, +to reduce the size of the fastutil dependency down to just those class +files actually used by bigdata. autojar itself is not required to run +bigdata and is NOT bundled with bigdata distributions. + +See http://autojar.sourceforge.net/en_d/index.html Property changes on: branches/QUADS_QUERY_BRANCH/src/build/autojar/README.txt ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Added: branches/QUADS_QUERY_BRANCH/src/build/autojar/autojar-license.txt =================================================================== --- branches/QUADS_QUERY_BRANCH/src/build/autojar/autojar-license.txt (rev 0) +++ branches/QUADS_QUERY_BRANCH/src/build/autojar/autojar-license.txt 2011-04-17 20:11:44 UTC (rev 4410) @@ -0,0 +1,340 @@ + GNU GENERAL PUBLIC LICENSE + Version 2, June 1991 + + Copyright (C) 1989, 1991 Free Software Foundation, Inc. + 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + Preamble + + The licenses for most software are designed to take away your +freedom to share and change it. 
By contrast, the GNU General Public +License is intended to guarantee your freedom to share and change free +software--to make sure the software is free for all its users. This +General Public License applies to most of the Free Software +Foundation's software and to any other program whose authors commit to +using it. (Some other Free Software Foundation software is covered by +the GNU Library General Public License instead.) You can apply it to +your programs, too. + + When we speak of free software, we are referring to freedom, not +price. Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +this service if you wish), that you receive source code or can get it +if you want it, that you can change the software or use pieces of it +in new free programs; and that you know you can do these things. + + To protect your rights, we need to make restrictions that forbid +anyone to deny you these rights or to ask you to surrender the rights. +These restrictions translate to certain responsibilities for you if you +distribute copies of the software, or if you modify it. + + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must give the recipients all the rights that +you have. You must make sure that they, too, receive or can get the +source code. And you must show them these terms so they know their +rights. + + We protect your rights with two steps: (1) copyright the software, and +(2) offer you this license which gives you legal permission to copy, +distribute and/or modify the software. + + Also, for each author's protection and ours, we want to make certain +that everyone understands that there is no warranty for this free +software. 
If the software is modified by someone else and passed on, we +want its recipients to know that what they have is not the original, so +that any problems introduced by others will not reflect on the original +authors' reputations. + + Finally, any free program is threatened constantly by software +patents. We wish to avoid the danger that redistributors of a free +program will individually obtain patent licenses, in effect making the +program proprietary. To prevent this, we have made it clear that any +patent must be licensed for everyone's free use or not licensed at all. + + The precise terms and conditions for copying, distribution and +modification follow. + + GNU GENERAL PUBLIC LICENSE + TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION + + 0. This License applies to any program or other work which contains +a notice placed by the copyright holder saying it may be distributed +under the terms of this General Public License. The "Program", below, +refers to any such program or work, and a "work based on the Program" +means either the Program or any derivative work under copyright law: +that is to say, a work containing the Program or a portion of it, +either verbatim or with modifications and/or translated into another +language. (Hereinafter, translation is included without limitation in +the term "modification".) Each licensee is addressed as "you". + +Activities other than copying, distribution and modification are not +covered by this License; they are outside its scope. The act of +running the Program is not restricted, and the output from the Program +is covered only if its contents constitute a work based on the +Program (independent of having been made by running the Program). +Whether that is true depends on what the Program does. + + 1. 
You may copy and distribute verbatim copies of the Program's +source code as you receive it, in any medium, provided that you +conspicuously and appropriately publish on each copy an appropriate +copyright notice and disclaimer of warranty; keep intact all the +notices that refer to this License and to the absence of any warranty; +and give any other recipients of the Program a copy of this License +along with the Program. + +You may charge a fee for the physical act of transferring a copy, and +you may at your option offer warranty protection in exchange for a fee. + + 2. You may modify your copy or copies of the Program or any portion +of it, thus forming a work based on the Program, and copy and +distribute such modifications or work under the terms of Section 1 +above, provided that you also meet all of these conditions: + + a) You must cause the modified files to carry prominent notices + stating that you changed the files and the date of any change. + + b) You must cause any work that you distribute or publish, that in + whole or in part contains or is derived from the Program or any + part thereof, to be licensed as a whole at no charge to all third + parties under the terms of this License. + + c) If the modified program normally reads commands interactively + when run, you must cause it, when started running for such + interactive use in the most ordinary way, to print or display an + announcement including an appropriate copyright notice and a + notice that there is no warranty (or else, saying that you provide + a warranty) and that users may redistribute the program under + these conditions, and telling the user how to view a copy of this + License. (Exception: if the Program itself is interactive but + does not normally print such an announcement, your work based on + the Program is not required to print an announcement.) + +These requirements apply to the modified work as a whole. 
If +identifiable sections of that work are not derived from the Program, +and can be reasonably considered independent and separate works in +themselves, then this License, and its terms, do not apply to those +sections when you distribute them as separate works. But when you +distribute the same sections as part of a whole which is a work based +on the Program, the distribution of the whole must be on the terms of +this License, whose permissions for other licensees extend to the +entire whole, and thus to each and every part regardless of who wrote it. + +Thus, it is not the intent of this section to claim rights or contest +your rights to work written entirely by you; rather, the intent is to +exercise the right to control the distribution of derivative or +collective works based on the Program. + +In addition, mere aggregation of another work not based on the Program +with the Program (or with a work based on the Program) on a volume of +a storage or distribution medium does not bring the other work under +the scope of this License. + + 3. You may copy and distribute the Program (or a work based on it, +under Section 2) in object code or executable form under the terms of +Sections 1 and 2 above provided that you also do one of the following: + + a) Accompany it with the complete corresponding machine-readable + source code, which must be distributed under the terms of Sections + 1 and 2 above on a medium customarily used for software interchange; or, + + b) Accompany it with a written offer, valid for at least three + years, to give any third party, for a charge no more than your + cost of physically performing source distribution, a complete + machine-readable copy of the corresponding source code, to be + distributed under the terms of Sections 1 and 2 above on a medium + customarily used for software interchange; or, + + c) Accompany it with the information you received as to the offer + to distribute corresponding source code. 
(This alternative is + allowed only for noncommercial distribution and only if you + received the program in object code or executable form with such + an offer, in accord with Subsection b above.) + +The source code for a work means the preferred form of the work for +making modifications to it. For an executable work, complete source +code means all the source code for all modules it contains, plus any +associated interface definition files, plus the scripts used to +control compilation and installation of the executable. However, as a +special exception, the source code distributed need not include +anything that is normally distributed (in either source or binary +form) with the major components (compiler, kernel, and so on) of the +operating system on which the executable runs, unless that component +itself accompanies the executable. + +If distribution of executable or object code is made by offering +access to copy from a designated place, then offering equivalent +access to copy the source code from the same place counts as +distribution of the source code, even though third parties are not +compelled to copy the source along with the object code. + + 4. You may not copy, modify, sublicense, or distribute the Program +except as expressly provided under this License. Any attempt +otherwise to copy, modify, sublicense or distribute the Program is +void, and will automatically terminate your rights under this License. +However, parties who have received copies, or rights, from you under +this License will not have their licenses terminated so long as such +parties remain in full compliance. + + 5. You are not required to accept this License, since you have not +signed it. However, nothing else grants you permission to modify or +distribute the Program or its derivative works. These actions are +prohibited by law if you do not accept this License. 
Therefore, by +modifying or distributing the Program (or any work based on the +Program), you indicate your acceptance of this License to do so, and +all its terms and conditions for copying, distributing or modifying +the Program or works based on it. + + 6. Each time you redistribute the Program (or any work based on the +Program), the recipient automatically receives a license from the +original licensor to copy, distribute or modify the Program subject to +these terms and conditions. You may not impose any further +restrictions on the recipients' exercise of the rights granted herein. +You are not responsible for enforcing compliance by third parties to +this License. + + 7. If, as a consequence of a court judgment or allegation of patent +infringement or for any other reason (not limited to patent issues), +conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot +distribute so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you +may not distribute the Program at all. For example, if a patent +license would not permit royalty-free redistribution of the Program by +all those who receive copies directly or indirectly through you, then +the only way you could satisfy both it and this License would be to +refrain entirely from distribution of the Program. + +If any portion of this section is held invalid or unenforceable under +any particular circumstance, the balance of the section is intended to +apply and the section as a whole is intended to apply in other +circumstances. 
+ +It is not the purpose of this section to induce you to infringe any +patents or other property right claims or to contest validity of any +such claims; this section has the sole purpose of protecting the +integrity of the free software distribution system, which is +implemented by public license practices. Many people have made +generous contributions to the wide range of software distributed +through that system in reliance on consistent application of that +system; it is up to the author/donor to decide if he or she is willing +to distribute software through any other system and a licensee cannot +impose that choice. + +This section is intended to make thoroughly clear what is believed to +be a consequence of the rest of this License. + + 8. If the distribution and/or use of the Program is restricted in +certain countries either by patents or by copyrighted interfaces, the +original copyright holder who places the Program under this License +may add an explicit geographical distribution limitation excluding +those countries, so that distribution is permitted only in or among +countries not thus excluded. In such case, this License incorporates +the limitation as if written in the body of this License. + + 9. The Free Software Foundation may publish revised and/or new versions +of the General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + +Each version is given a distinguishing version number. If the Program +specifies a version number of this License which applies to it and "any +later version", you have the option of following the terms and conditions +either of that version or of any later version published by the Free +Software Foundation. If the Program does not specify a version number of +this License, you may choose any version ever published by the Free Software +Foundation. + + 10. 
If you wish to incorporate parts of the Program into other free +programs whose distribution conditions are different, write to the author +to ask for permission. For software which is copyrighted by the Free +Software Foundation, write to the Free Software Foundation; we sometimes +make exceptions for this. Our decision will be guided by the two goals +of preserving the free status of all derivatives of our free software and +of promoting the sharing and reuse of software generally. + + NO WARRANTY + + 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY +FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN +OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES +PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED +OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF +MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS +TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE +PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, +REPAIR OR CORRECTION. + + 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR +REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, +INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING +OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED +TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY +YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER +PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE +POSSIBILITY OF SUCH DAMAGES. 
+ + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Programs + + If you develop a new program, and you want it to be of the greatest +possible use to the public, the best way to achieve this is to make it +free software which everyone can redistribute and change under these terms. + + To do so, attach the following notices to the program. It is safest +to attach them to the start of each source file to most effectively +convey the exclusion of warranty; and each file should have at least +the "copyright" line and a pointer to where the full notice is found. + + <one line to give the program's name and a brief idea of what it does.> + Copyright (C) <year> <name of author> + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + + +Also add information on how to contact you by electronic and paper mail. + +If the program is interactive, make it output a short notice like this +when it starts in an interactive mode: + + Gnomovision version 69, Copyright (C) year name of author + Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. + This is free software, and you are welcome to redistribute it + under certain conditions; type `show c' for details. + +The hypothetical commands `show w' and `show c' should show the appropriate +parts of the General Public License. 
Of course, the commands you use may +be called something other than `show w' and `show c'; they could even be +mouse-clicks or menu items--whatever suits your program. + +You should also get your employer (if you work as a programmer) or your +school, if any, to sign a "copyright disclaimer" for the program, if +necessary. Here is a sample; alter the names: + + Yoyodyne, Inc., hereby disclaims all copyright interest in the program + `Gnomovision' (which makes passes at compilers) written by James Hacker. + + <signature of Ty Coon>, 1 April 1989 + Ty Coon, President of Vice + +This General Public License does not permit incorporating your program into +proprietary programs. If your program is a subroutine library, you may +consider it more useful to permit linking proprietary applications with the +library. If this is what you want to do, use the GNU Library General +Public License instead of this License. Property changes on: branches/QUADS_QUERY_BRANCH/src/build/autojar/autojar-license.txt ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Added: branches/QUADS_QUERY_BRANCH/src/build/autojar/autojar.jar =================================================================== (Binary files differ) Property changes on: branches/QUADS_QUERY_BRANCH/src/build/autojar/autojar.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2011-05-09 12:48:05
|
Revision: 4465 http://bigdata.svn.sourceforge.net/bigdata/?rev=4465&view=rev Author: thompsonbry Date: 2011-05-09 12:47:59 +0000 (Mon, 09 May 2011) Log Message: ----------- Bug fix to BigdataSailConnection for improper rollback of full transactions. The code was invoking super.rollback() which caused a database (Journal) level abort() in addition to performing a transaction level abort. The journal level abort was causing updates against the unisolated lexicon indices to be lost and was breaking the eventually consistent contract for TERM2ID and ID2TERM. Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java Modified: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java 2011-05-08 14:18:34 UTC (rev 4464) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java 2011-05-09 12:47:59 UTC (rev 4465) @@ -2286,7 +2286,7 @@ * Or perhaps this can be rolled into the {@link ValueFactory} impl * along with the reverse bnodes mapping? */ - private ConcurrentWeakValueCacheWithBatchedUpdates<IV, BigdataValue> termCache; + final private ConcurrentWeakValueCacheWithBatchedUpdates<IV, BigdataValue> termCache; /** * Factory used for {@link #termCache} for read-only views of the lexicon. @@ -2431,11 +2431,18 @@ * {@link BigdataValue} for that term identifier in the lexicon. */ final public BigdataValue getTerm(final IV iv) { - + +// if (false) { // alternative forces the standard code path. 
+// final Collection<IV> ivs = new LinkedList<IV>(); +// ivs.add(iv); +// final Map<IV, BigdataValue> values = getTerms(ivs); +// return values.get(iv); +// } + if (iv.isInline()) return iv.asValue(this); - TermId tid = (TermId) iv; + final TermId tid = (TermId) iv; // handle NULL, bnodes, statement identifiers, and the termCache. BigdataValue value = _getTermId(tid); @@ -2531,7 +2538,7 @@ * @return * the term identifier for the value */ - private TermId getTermId(Value value) { + private TermId getTermId(final Value value) { final IIndex ndx = getTerm2IdIndex(); Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-05-08 14:18:34 UTC (rev 4464) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-05-09 12:47:59 UTC (rev 4465) @@ -1582,7 +1582,15 @@ @Override public synchronized void rollback() throws SailException { - super.rollback(); + /* + * Note: DO NOT invoke super.rollback(). That will cause a + * database (Journal) level abort(). The Journal level abort() + * will discard the writes buffered on the unisolated indices + * (the lexicon indices). That will cause lost updates and break + * the eventually consistent design for the TERM2ID and ID2TERM + * indices. + */ +// super.rollback(); try { @@ -3498,7 +3506,7 @@ * native joins and the BigdataEvaluationStatistics rely on * this. */ - Object[] newVals = replaceValues(dataset, tupleExpr, bindings); + final Object[] newVals = replaceValues(dataset, tupleExpr, bindings); dataset = (Dataset) newVals[0]; bindings = (BindingSet) newVals[1]; @@ -3542,8 +3550,6 @@ } catch (QueryEvaluationException e) { -// log.error("Remove log stmt"+e,e);// FIXME remove this - I am just looking for the root cause of something in the SAIL. 
- throw new SailException(e); } |
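The rollback fix above lends itself to a small illustration. The sketch below is not the bigdata code; `Store` and `Connection` are hypothetical stand-ins modeling the relationship the commit message describes: lexicon writes are buffered on shared unisolated indices, so a transaction-level rollback must discard only transaction-local writes and must not trigger a store-level abort().

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the Journal: buffers "unisolated" writes that are
// shared across transactions (the eventually consistent TERM2ID/ID2TERM idea).
class Store {
    final List<String> unisolatedWrites = new ArrayList<>();

    // A store-level abort() discards everything buffered on the unisolated
    // indices -- this is the call the fix removes from the rollback path.
    void abort() {
        unisolatedWrites.clear();
    }
}

// Hypothetical stand-in for the SailConnection.
class Connection {
    private final Store store;
    private final List<String> txWrites = new ArrayList<>();

    Connection(Store store) { this.store = store; }

    void addLexiconEntry(String term) {
        // Lexicon writes go against the shared unisolated indices immediately.
        store.unisolatedWrites.add(term);
    }

    void addStatement(String stmt) {
        // Statement writes are isolated to this transaction.
        txWrites.add(stmt);
    }

    // Buggy behavior: rolling back the transaction also aborts the store,
    // losing the lexicon writes (analogous to calling super.rollback()).
    void rollbackWithStoreAbort() {
        txWrites.clear();
        store.abort();
    }

    // Fixed behavior: discard only the transaction-local writes.
    void rollback() {
        txWrites.clear();
    }

    int txWriteCount() { return txWrites.size(); }
}
```

With the fixed rollback(), a rolled-back transaction loses its statement writes but the lexicon entry survives on the store; with the store-abort variant, the lexicon entry is lost as well.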
From: <tho...@us...> - 2011-05-09 13:30:31
|
Revision: 4466 http://bigdata.svn.sourceforge.net/bigdata/?rev=4466&view=rev Author: thompsonbry Date: 2011-05-09 13:30:21 +0000 (Mon, 09 May 2011) Log Message: ----------- Derived a transactional and non-transactional version of TestRollbacks and incorporated both test classes into each of the proxy test suites (triples, sids, quads). Javadoc comment in LexiconRelation. Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/contrib/TestRollbacks.java Added Paths: ----------- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/contrib/TestRollbacksTx.java Modified: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java 2011-05-09 12:47:59 UTC (rev 4465) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/LexiconRelation.java 2011-05-09 13:30:21 UTC (rev 4466) @@ -741,6 +741,11 @@ * full tx isolation. This is because we use an * eventually consistent strategy to write on the * lexicon indices. + * + * Note: It appears that we have already ensured that + * we will be using the unisolated view of the lexicon + * relation in AbstractTripleStore#getLexiconRelation() + * so this code path should not be evaluated. */ term2id = AbstractRelation .getIndex(getIndexManager(), @@ -781,6 +786,11 @@ * full tx isolation. 
This is because we use an * eventually consistent strategy to write on the * lexicon indices. + * + * Note: It appears that we have already ensured that + * we will be using the unisolated view of the lexicon + * relation in AbstractTripleStore#getLexiconRelation() + * so this code path should not be evaluated. */ id2term = AbstractRelation .getIndex(getIndexManager(), Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java 2011-05-09 12:47:59 UTC (rev 4465) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java 2011-05-09 13:30:21 UTC (rev 4466) @@ -112,6 +112,7 @@ suite.addTestSuite(TestTxCreate.class); suite.addTestSuite(com.bigdata.rdf.sail.contrib.TestRollbacks.class); + suite.addTestSuite(com.bigdata.rdf.sail.contrib.TestRollbacksTx.class); // The Sesame TCK, including the SPARQL test suite. 
{ Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java 2011-05-09 12:47:59 UTC (rev 4465) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java 2011-05-09 13:30:21 UTC (rev 4466) @@ -90,7 +90,10 @@ suite.addTestSuite(TestTxCreate.class); - return suite; + suite.addTestSuite(com.bigdata.rdf.sail.contrib.TestRollbacks.class); + suite.addTestSuite(com.bigdata.rdf.sail.contrib.TestRollbacksTx.class); + + return suite; } Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java 2011-05-09 12:47:59 UTC (rev 4465) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java 2011-05-09 13:30:21 UTC (rev 4466) @@ -86,6 +86,9 @@ suite.addTestSuite(TestTxCreate.class); + suite.addTestSuite(com.bigdata.rdf.sail.contrib.TestRollbacks.class); + suite.addTestSuite(com.bigdata.rdf.sail.contrib.TestRollbacksTx.class); + return suite; } Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/contrib/TestRollbacks.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/contrib/TestRollbacks.java 2011-05-09 12:47:59 UTC (rev 4465) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/contrib/TestRollbacks.java 2011-05-09 13:30:21 UTC (rev 4466) @@ -103,8 +103,10 @@ NoVocabulary.class.getName()); props.setProperty(BigdataSail.Options.TRUTH_MAINTENANCE, "false"); 
props.setProperty(BigdataSail.Options.JUSTIFY, "false"); - props.setProperty(BigdataSail.Options.ISOLATABLE_INDICES, "true"); + // transactions are off in the base version of this class. + props.setProperty(BigdataSail.Options.ISOLATABLE_INDICES, "false"); + // props.setProperty(BigdataSail.Options.CREATE_TEMP_FILE, "true"); // props.setProperty(BigdataSail.Options.BUFFER_MODE, BufferMode.DiskRW // .toString()); Added: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/contrib/TestRollbacksTx.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/contrib/TestRollbacksTx.java (rev 0) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/contrib/TestRollbacksTx.java 2011-05-09 13:30:21 UTC (rev 4466) @@ -0,0 +1,29 @@ +package com.bigdata.rdf.sail.contrib; + +import java.util.Properties; + +import com.bigdata.rdf.sail.BigdataSail; + +public class TestRollbacksTx extends TestRollbacks { + + public TestRollbacksTx() { + super(); + } + + public TestRollbacksTx(String name) { + super(name); + } + + @Override + public Properties getProperties() { + + final Properties props = super.getProperties(); + + // transactions are ON in this version of this class. + props.setProperty(BigdataSail.Options.ISOLATABLE_INDICES, "true"); + + return props; + + } + +} This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
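The TestRollbacks / TestRollbacksTx split above is an instance of a common JUnit pattern: derive a configuration variant of a test fixture by overriding only its property factory, so both variants share every test method. A minimal stand-alone sketch of the pattern (class and property-key names are illustrative stand-ins, not the real bigdata classes):

```java
import java.util.Properties;

// Base fixture: runs with transactions disabled.
class RollbacksFixture {
    // Stand-in for BigdataSail.Options.ISOLATABLE_INDICES.
    static final String ISOLATABLE = "ISOLATABLE_INDICES";

    Properties getProperties() {
        final Properties props = new Properties();
        // Transactions are off in the base version of the fixture.
        props.setProperty(ISOLATABLE, "false");
        return props;
    }
}

// Variant fixture: inherits all behavior, flips one property.
class RollbacksTxFixture extends RollbacksFixture {
    @Override
    Properties getProperties() {
        final Properties props = super.getProperties();
        // Transactions are ON in this version of the fixture.
        props.setProperty(ISOLATABLE, "true");
        return props;
    }
}
```

Registering both classes in each proxy suite (triples, sids, quads), as the diff does, then runs the same rollback scenarios under both configurations.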
From: <tho...@us...> - 2011-05-15 17:46:45
|
Revision: 4502 http://bigdata.svn.sourceforge.net/bigdata/?rev=4502&view=rev Author: thompsonbry Date: 2011-05-15 17:46:39 +0000 (Sun, 15 May 2011) Log Message: ----------- Removed reference to the older version of the NanoSparqlServer (in the sail.bench package). Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/logging/log4j.properties branches/QUADS_QUERY_BRANCH/bigdata-perf/uniprot/build.xml branches/QUADS_QUERY_BRANCH/bigdata-perf/uniprot/src/resources/logging/log4j.properties branches/QUADS_QUERY_BRANCH/src/resources/scripts/nanoSparqlServer.sh Modified: branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/logging/log4j.properties =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/logging/log4j.properties 2011-05-15 17:46:23 UTC (rev 4501) +++ branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/logging/log4j.properties 2011-05-15 17:46:39 UTC (rev 4502) @@ -20,7 +20,7 @@ #log4j.logger.com.bigdata.cache.BCHMGlobalLRU2=TRACE -#log4j.logger.com.bigdata.rdf.sail.bench.NanoSparqlServer=INFO +#log4j.logger.com.bigdata.rdf.sail.webapp.NanoSparqlServer=INFO log4j.logger.com.bigdata.relation.accesspath.BlockingBuffer=ERROR log4j.logger.com.bigdata.service.ResourceService=INFO Modified: branches/QUADS_QUERY_BRANCH/bigdata-perf/uniprot/build.xml =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-perf/uniprot/build.xml 2011-05-15 17:46:23 UTC (rev 4501) +++ branches/QUADS_QUERY_BRANCH/bigdata-perf/uniprot/build.xml 2011-05-15 17:46:39 UTC (rev 4502) @@ -40,7 +40,7 @@ </target> <target name="start-nano-server" depends="prepare" description="Start a small http server fronting for a bigdata database instance."> - <java classname="com.bigdata.rdf.sail.bench.NanoSparqlServer" fork="true" failonerror="true"> + <java classname="com.bigdata.rdf.sail.webapp.NanoSparqlServer" fork="true" 
failonerror="true"> <arg line="${nanoServerPort} ${namespace} ${journalPropertyFile}" /> <!-- specify/override the journal file name. --> <jvmarg line="${queryJvmArgs} @@ -52,15 +52,4 @@ </java> </target> - <!-- - <target name="stop-nano-server" depends="prepare" description="Stop the small http server running at the configured port."> - <java classname="com.bigdata.rdf.sail.bench.NanoSparqlServer" fork="true" failonerror="true"> - <arg line="${nanoServerPort} -stop" /> - <classpath> - <path refid="runtime.classpath" /> - </classpath> - </java> - </target> - --> - </project> Modified: branches/QUADS_QUERY_BRANCH/bigdata-perf/uniprot/src/resources/logging/log4j.properties =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-perf/uniprot/src/resources/logging/log4j.properties 2011-05-15 17:46:23 UTC (rev 4501) +++ branches/QUADS_QUERY_BRANCH/bigdata-perf/uniprot/src/resources/logging/log4j.properties 2011-05-15 17:46:39 UTC (rev 4502) @@ -17,7 +17,7 @@ #log4j.logger.com.bigdata.rdf.sail.BigdataSail=INFO #log4j.logger.com.bigdata.rdf.sail.BigdataEvaluationStrategyImpl2=INFO -#log4j.logger.com.bigdata.rdf.sail.bench.NanoSparqlServer=INFO +#log4j.logger.com.bigdata.rdf.sail.webapp.NanoSparqlServer=INFO # My Stuff #log4j.logger.com.bigdata.rdf.sail.BigdataSail=DEBUG Modified: branches/QUADS_QUERY_BRANCH/src/resources/scripts/nanoSparqlServer.sh =================================================================== --- branches/QUADS_QUERY_BRANCH/src/resources/scripts/nanoSparqlServer.sh 2011-05-15 17:46:23 UTC (rev 4501) +++ branches/QUADS_QUERY_BRANCH/src/resources/scripts/nanoSparqlServer.sh 2011-05-15 17:46:39 UTC (rev 4502) @@ -14,7 +14,7 @@ java ${JAVA_OPTS} \ -cp ${CLASSPATH} \ - com.bigdata.rdf.sail.bench.NanoSparqlServer \ + com.bigdata.rdf.sail.webapp.NanoSparqlServer \ $port \ $namespace \ ${BIGDATA_CONFIG} ${BIGDATA_CONFIG_OVERRIDES} This was sent by the SourceForge.net collaborative development platform, the 
world's largest Open Source development site. |
From: <tho...@us...> - 2011-05-18 18:14:55
|
Revision: 4523 http://bigdata.svn.sourceforge.net/bigdata/?rev=4523&view=rev Author: thompsonbry Date: 2011-05-18 18:14:45 +0000 (Wed, 18 May 2011) Log Message: ----------- Merge INT64_BRANCH to QUADS_QUERY_BRANCH [r4485:r4522]. Conflicts: 1 (htree.xls). Added: 2 Updated: 85. The htree.xls conflict was resolved by accepting the incoming version. I am working on that file and I will handle the conflict when I merge this update back into my working copy. The branch is closed. See https://sourceforge.net/apps/trac/bigdata/ticket/294 Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/bigdata/src/architecture/segment math.xls branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bfs/AtomicBlockAppendProc.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/ap/SampleIndex.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/cost/BTreeCostModel.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractNode.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTreeStatistics.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTreeUtilizationReport.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BloomFilter.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/Checkpoint.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IBTreeStatistics.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/ILinearList.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegment.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentBuilder.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentCheckpoint.java branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentPlan.java 
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/Leaf.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/MutableLeafData.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/MutableNodeData.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/Node.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/ResultSet.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/data/AbstractReadOnlyNodeData.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/data/DefaultLeafCoder.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/data/DefaultNodeCoder.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/data/IAbstractNodeData.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/data/ILeafData.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/data/INodeData.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/data/ISpannedTupleCountData.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/htree/HTree.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/htree/HashBucket.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/htree/MutableBucketData.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/journal/CommitRecordIndex.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/mdi/MetadataIndexView.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/relation/accesspath/AccessPath.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/resources/AsynchronousOverflowTask.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/resources/BTreeMetadata.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/resources/BuildViewMetadata.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/resources/JournalIndex.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/resources/OverflowManager.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/resources/PurgeResult.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/resources/ResourceManager.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/resources/SplitUtility.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/resources/StoreManager.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/service/CommitTimeIndex.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/service/DistributedTransactionService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/service/MetadataService.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/bop/ap/TestSampleIndex.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/AbstractBTreeTestCase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/TestFindChild.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/TestIndexSegmentCheckpoint.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/TestIndexSegmentMultiBlockIterators.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/TestIndexSegmentPlan.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/TestIndexSegmentWithBloomFilter.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/TestLinearListMethods.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/data/AbstractNodeDataRecordTestCase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/data/AbstractNodeOrLeafDataRecordTestCase.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/data/MockLeafData.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/btree/data/MockNodeData.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/resources/TestBuildTask2.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/resources/TestFixedLengthPrefixShardSplits.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/resources/TestSparseRowStoreSplitHandler.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/service/TestDistributedTransactionServiceRestart.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/service/TestSnapshotHelper.java
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/service/ndx/pipeline/TestMasterTaskWithSplits.java
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/service/jini/util/DumpFederation.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/inf/Justification.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/XXXCShardSplitHandler.java
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/spo/TestXXXCShardSplitHandler.java

Property Changed:
----------------
branches/QUADS_QUERY_BRANCH/
branches/QUADS_QUERY_BRANCH/bigdata/lib/jetty/
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/aggregate/
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/joinGraph/
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/joinGraph/fast/
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/joinGraph/rto/
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/util/
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/htree/raba/
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/jsr166/
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/bop/joinGraph/
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/bop/joinGraph/fast/
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/bop/joinGraph/rto/
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/bop/util/
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/jsr166/
branches/QUADS_QUERY_BRANCH/bigdata/src/test/com/bigdata/util/httpd/
branches/QUADS_QUERY_BRANCH/bigdata-compatibility/
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/attr/
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/disco/
branches/QUADS_QUERY_BRANCH/bigdata-jini/src/java/com/bigdata/util/config/
branches/QUADS_QUERY_BRANCH/bigdata-perf/
branches/QUADS_QUERY_BRANCH/bigdata-perf/bsbm/
branches/QUADS_QUERY_BRANCH/bigdata-perf/btc/
branches/QUADS_QUERY_BRANCH/bigdata-perf/btc/src/resources/
branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/
branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/lib/
branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/
branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/answers (U1)/
branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/config/
branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/logging/
branches/QUADS_QUERY_BRANCH/bigdata-perf/lubm/src/resources/scripts/
branches/QUADS_QUERY_BRANCH/bigdata-perf/uniprot/
branches/QUADS_QUERY_BRANCH/bigdata-perf/uniprot/src/
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/bop/rdf/aggregate/
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/changesets/
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/error/
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/internal/
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/relation/
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/util/
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/samples/
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/bop/rdf/aggregate/
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/internal/
branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/relation/
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/changesets/
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/bench/
branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/
branches/QUADS_QUERY_BRANCH/dsi-utils/
branches/QUADS_QUERY_BRANCH/dsi-utils/LEGAL/
branches/QUADS_QUERY_BRANCH/dsi-utils/lib/
branches/QUADS_QUERY_BRANCH/dsi-utils/src/
branches/QUADS_QUERY_BRANCH/dsi-utils/src/java/
branches/QUADS_QUERY_BRANCH/dsi-utils/src/java/it/
branches/QUADS_QUERY_BRANCH/dsi-utils/src/java/it/unimi/
branches/QUADS_QUERY_BRANCH/dsi-utils/src/test/
branches/QUADS_QUERY_BRANCH/dsi-utils/src/test/it/unimi/
branches/QUADS_QUERY_BRANCH/dsi-utils/src/test/it/unimi/dsi/
branches/QUADS_QUERY_BRANCH/lgpl-utils/src/java/it/unimi/dsi/fastutil/bytes/custom/
branches/QUADS_QUERY_BRANCH/lgpl-utils/src/test/it/unimi/dsi/fastutil/bytes/custom/
branches/QUADS_QUERY_BRANCH/osgi/
branches/QUADS_QUERY_BRANCH/src/resources/bin/config/

Property changes on: branches/QUADS_QUERY_BRANCH
___________________________________________________________________
Modified: svn:mergeinfo
   - /branches/BTREE_BUFFER_BRANCH:2004-2045 /branches/DEV_BRANCH_27_OCT_2009:2270-2546,2548-2782 /branches/JOURNAL_HA_BRANCH:2596-4066 /branches/LARGE_LITERALS_REFACTOR:4175-4387 /branches/LEXICON_REFACTOR_BRANCH:2633-3304 /branches/bugfix-btm:2594-3237 /branches/dev-btm:2574-2730 /branches/fko:3150-3194 /trunk:3392-3437,3656-4061
   + /branches/BTREE_BUFFER_BRANCH:2004-2045 /branches/DEV_BRANCH_27_OCT_2009:2270-2546,2548-2782 /branches/INT64_BRANCH:4486-4522 /branches/JOURNAL_HA_BRANCH:2596-4066 /branches/LARGE_LITERALS_REFACTOR:4175-4387 /branches/LEXICON_REFACTOR_BRANCH:2633-3304 /branches/bugfix-btm:2594-3237 /branches/dev-btm:2574-2730 /branches/fko:3150-3194 /trunk:3392-3437,3656-4061

Property changes on: branches/QUADS_QUERY_BRANCH/bigdata/lib/jetty
___________________________________________________________________
Modified: svn:mergeinfo
   -
   + /branches/INT64_BRANCH/bigdata/lib/jetty:4486-4522

Modified:
branches/QUADS_QUERY_BRANCH/bigdata/src/architecture/segment math.xls =================================================================== (Binary files differ) Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bfs/AtomicBlockAppendProc.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bfs/AtomicBlockAppendProc.java 2011-05-18 17:58:13 UTC (rev 4522) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bfs/AtomicBlockAppendProc.java 2011-05-18 18:14:45 UTC (rev 4523) @@ -274,7 +274,7 @@ * key. Otherwise this will be the first block written for that * file. */ - int toIndex = tmp.indexOf(toKey); + long toIndex = tmp.indexOf(toKey); assert toIndex < 0 : "Expecting insertion point: id=" + id + ", version=" + version + ", toIndex=" + toIndex; @@ -285,7 +285,7 @@ toIndex = -(toIndex + 1); // convert to an index. // #of entries in the index. - final int entryCount = ((AbstractBTree) ndx).getEntryCount(); + final long entryCount = ((AbstractBTree) ndx).getEntryCount(); if (log.isDebugEnabled()) log.debug("toIndex=" + toIndex + ", entryCount=" + entryCount); Property changes on: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/aggregate ___________________________________________________________________ Modified: svn:mergeinfo - + /branches/INT64_BRANCH/bigdata/src/java/com/bigdata/bop/aggregate:4486-4522 Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/ap/SampleIndex.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/ap/SampleIndex.java 2011-05-18 17:58:13 UTC (rev 4522) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/ap/SampleIndex.java 2011-05-18 18:14:45 UTC (rev 4523) @@ -368,13 +368,13 @@ */ /** The #of tuples to be skipped after every tuple visited. 
*/ - private transient int skipCount; + private transient long skipCount; /** The #of tuples accepted so far. */ private transient int nread = 0; /** The inclusive lower bound of the first tuple actually visited. */ - private transient int fromIndex; + private transient long fromIndex; /** The exclusive upper bound of the last tuple which could be visited. */ - private transient int toIndex; + private transient long toIndex; /** * @@ -393,7 +393,7 @@ final AbstractBTree ndx = (AbstractBTree) src.getIndex(); - final int currentIndex = ndx.indexOf(tuple.getKey()); + final long currentIndex = ndx.indexOf(tuple.getKey()); if (nread == 0) { @@ -409,9 +409,9 @@ toIndex = -toIndex + 1; } - final int rangeCount = (toIndex - fromIndex); + final long rangeCount = (toIndex - fromIndex); - skipCount = Math.max(1, rangeCount / limit); + skipCount = Math.max(1L, rangeCount / limit); // minus one since src.next() already consumed one tuple. skipCount -= 1; @@ -429,7 +429,7 @@ * If the skip count is positive, then skip over N tuples. */ - final int nextIndex = Math.min(ndx.getEntryCount() - 1, + final long nextIndex = Math.min(ndx.getEntryCount() - 1, currentIndex + skipCount); src.seek(ndx.keyAt(nextIndex)); @@ -474,13 +474,13 @@ */ /** The offset of each tuple to be sampled. */ - private transient int[] offsets; + private transient long[] offsets; /** The #of tuples accepted so far. */ private transient int nread = 0; /** The inclusive lower bound of the first tuple actually visited. */ - private transient int fromIndex; + private transient long fromIndex; /** The exclusive upper bound of the last tuple which could be visited. */ - private transient int toIndex; + private transient long toIndex; /** * @@ -539,7 +539,7 @@ * Skip to the next tuple. 
*/ - final int nextIndex = offsets[nread]; + final long nextIndex = offsets[nread]; // System.err.println("limit=" + limit + ", rangeCount=" // + (toIndex - fromIndex) + ", fromIndex=" + fromIndex @@ -688,8 +688,8 @@ * @throws IllegalArgumentException * unless <i>toIndex</i> is GT <i>fromIndex</i>. */ - int[] getOffsets(final long seed, int limit, final int fromIndex, - final int toIndex); + long[] getOffsets(long seed, int limit, long fromIndex, long toIndex); + } /** @@ -703,8 +703,8 @@ /** * {@inheritDoc} */ - public int[] getOffsets(final long seed, int limit, - final int fromIndex, final int toIndex) { + public long[] getOffsets(final long seed, int limit, + final long fromIndex, final long toIndex) { if (limit < 1) throw new IllegalArgumentException(); @@ -715,10 +715,15 @@ if (toIndex <= fromIndex) throw new IllegalArgumentException(); - final int rangeCount = (toIndex - fromIndex); + final long rangeCount = (toIndex - fromIndex); - if (limit > rangeCount) - limit = rangeCount; + if (limit > rangeCount) { + /* + * Note: cast valid since limit is int32 and limit LT rangeCount + * so rangeCount may be cast to int32. + */ + limit = (int) rangeCount; + } if (limit == rangeCount) { @@ -782,8 +787,8 @@ * if <i>limit!=rangeCount</i> (after adjusting for limits * greater than the rangeCount). */ - public int[] getOffsets(final long seed, int limit, - final int fromIndex, final int toIndex) { + public long[] getOffsets(final long seed, int limit, + final long fromIndex, final long toIndex) { if (limit < 1) throw new IllegalArgumentException(); @@ -794,16 +799,21 @@ if (toIndex <= fromIndex) throw new IllegalArgumentException(); - final int rangeCount = (toIndex - fromIndex); + final long rangeCount = (toIndex - fromIndex); - if (limit > rangeCount) - limit = rangeCount; + if (limit > rangeCount) { + /* + * Note: cast valid since limit is int32 and limit LT rangeCount + * so rangeCount may be cast to int32. 
+ */ + limit = (int) rangeCount; + } if (limit != rangeCount) throw new UnsupportedOperationException(); // offsets of tuples to visit. - final int[] offsets = new int[limit]; + final long[] offsets = new long[limit]; for (int i = 0; i < limit; i++) { @@ -832,8 +842,18 @@ */ static public class BitVectorOffsetSampler implements IOffsetSampler { - public int[] getOffsets(final long seed, int limit, - final int fromIndex, final int toIndex) { + /** + * {@inheritDoc} + * <p> + * Note: The utility of this class is limited to smaller range counts + * (32k is fine, 2x or 4x that is also Ok) so it will reject anything + * with a very large range count. + * + * @throws UnsupportedOperationException + * if the rangeCount is GT {@link Integer#MAX_VALUE} + */ + public long[] getOffsets(final long seed, int limit, + final long fromIndex, final long toIndex) { if (limit < 1) throw new IllegalArgumentException(); @@ -844,13 +864,25 @@ if (toIndex <= fromIndex) throw new IllegalArgumentException(); - final int rangeCount = (toIndex - fromIndex); + final long rangeCount2 = (toIndex - fromIndex); - if (limit > rangeCount) + if (rangeCount2 > Integer.MAX_VALUE) { + /* + * The utility of this class is limited to smaller range counts + * so it will reject anything with a very large range count. + */ + throw new UnsupportedOperationException(); + } + + // known to be an int32 value. + final int rangeCount = (int) rangeCount2; + + if (limit > rangeCount) { limit = rangeCount; + } // offsets of tuples to visit. - final int[] offsets = new int[limit]; + final long [] offsets = new long [limit]; // create a cleared bit vector of the stated capacity. 
final BitVector v = LongArrayBitVector.ofLength(// @@ -921,8 +953,17 @@ */ static public class AcceptanceSetOffsetSampler implements IOffsetSampler { - public int[] getOffsets(final long seed, int limit, - final int fromIndex, final int toIndex) { + /** + * {@inheritDoc} + * <p> + * Note: The utility of this class is limited to moderate range counts + * (~100k) so it will reject anything with a very large range count. + * + * @throws UnsupportedOperationException + * if the rangeCount is GT {@link Integer#MAX_VALUE} + */ + public long[] getOffsets(final long seed, int limit, + final long fromIndex, final long toIndex) { if (limit < 1) throw new IllegalArgumentException(); @@ -933,13 +974,19 @@ if (toIndex <= fromIndex) throw new IllegalArgumentException(); - final int rangeCount = (toIndex - fromIndex); + final long rangeCount2 = (toIndex - fromIndex); - if (limit > rangeCount) + if (rangeCount2 > Integer.MAX_VALUE) + throw new UnsupportedOperationException(); + + final int rangeCount = (int) rangeCount2; + + if (limit > rangeCount) { limit = rangeCount; + } // offsets of tuples to visit. - final int[] offsets = new int[limit]; + final long [] offsets = new long[limit]; // hash set of accepted offsets. 
final IntOpenHashSet v = new IntOpenHashSet( Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/cost/BTreeCostModel.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/cost/BTreeCostModel.java 2011-05-18 17:58:13 UTC (rev 4522) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/cost/BTreeCostModel.java 2011-05-18 18:14:45 UTC (rev 4523) @@ -31,9 +31,6 @@ import com.bigdata.btree.AbstractBTree; import com.bigdata.btree.BTree; -import com.bigdata.btree.BTreeUtilizationReport; -import com.bigdata.btree.IBTreeStatistics; -import com.bigdata.btree.IBTreeUtilizationReport; import com.bigdata.journal.Journal; /** @@ -235,54 +232,55 @@ } - private static class MockBTreeStatistics implements IBTreeStatistics { +// private static class MockBTreeStatistics implements IBTreeStatistics { +// +// private final int m; +// +// private final int height; +// +// private final long leafCount; +// +// private final long nodeCount; +// +// private final long entryCount; +// +// private final IBTreeUtilizationReport utilReport; +// +// public MockBTreeStatistics(final int m, final int height, +// final long nodeCount, final long leafCount, +// final long entryCount) { +// this.m = m; +// this.height = height; +// this.nodeCount = nodeCount; +// this.leafCount = leafCount; +// this.entryCount = entryCount; +// this.utilReport = new BTreeUtilizationReport(this); +// } +// +// public int getBranchingFactor() { +// return m; +// } +// +// public int getHeight() { +// return height; +// } +// +// public long getNodeCount() { +// return nodeCount; +// } +// +// public long getLeafCount() { +// return leafCount; +// } +// +// public long getEntryCount() { +// return entryCount; +// } +// +// public IBTreeUtilizationReport getUtilization() { +// return utilReport; +// } +// +// } // MockBTreeStatistics - private final int m; - - private final int entryCount; - - private final 
int height; - - private final int leafCount; - - private final int nodeCount; - - private final IBTreeUtilizationReport utilReport; - - public MockBTreeStatistics(final int m, final int entryCount, - final int height, final int leafCount, final int nodeCount) { - this.m = m; - this.entryCount = entryCount; - this.height = height; - this.leafCount = leafCount; - this.nodeCount = nodeCount; - this.utilReport = new BTreeUtilizationReport(this); - } - - public int getBranchingFactor() { - return m; - } - - public int getEntryCount() { - return entryCount; - } - - public int getHeight() { - return height; - } - - public int getLeafCount() { - return leafCount; - } - - public int getNodeCount() { - return nodeCount; - } - - public IBTreeUtilizationReport getUtilization() { - return utilReport; - } - - } - } Property changes on: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/joinGraph ___________________________________________________________________ Modified: svn:mergeinfo - + /branches/INT64_BRANCH/bigdata/src/java/com/bigdata/bop/joinGraph:4486-4522 Property changes on: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/joinGraph/fast ___________________________________________________________________ Deleted: svn:mergeinfo - Property changes on: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/joinGraph/rto ___________________________________________________________________ Deleted: svn:mergeinfo - Property changes on: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/bop/util ___________________________________________________________________ Modified: svn:mergeinfo - + /branches/INT64_BRANCH/bigdata/src/java/com/bigdata/bop/util:4486-4522 Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2011-05-18 17:58:13 UTC (rev 4522) +++ 
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2011-05-18 18:14:45 UTC (rev 4523) @@ -802,13 +802,13 @@ tmp.addCounter("height", new OneShotInstrument<Integer>(getHeight())); - tmp.addCounter("nodeCount", new OneShotInstrument<Integer>( + tmp.addCounter("nodeCount", new OneShotInstrument<Long>( getNodeCount())); - tmp.addCounter("leafCount", new OneShotInstrument<Integer>( + tmp.addCounter("leafCount", new OneShotInstrument<Long>( getLeafCount())); - tmp.addCounter("tupleCount", new OneShotInstrument<Integer>( + tmp.addCounter("tupleCount", new OneShotInstrument<Long>( getEntryCount())); /* @@ -842,15 +842,15 @@ * to avoid dragging in the B+Tree reference. */ - final int entryCount = getEntryCount(); + final long entryCount = getEntryCount(); final long bytes = btreeCounters.bytesOnStore_nodesAndLeaves.get() + btreeCounters.bytesOnStore_rawRecords.get(); - final int bytesPerTuple = (int) (entryCount == 0 ? 0d + final long bytesPerTuple = (long) (entryCount == 0 ? 0d : (bytes / entryCount)); - tmp.addCounter("bytesPerTuple", new OneShotInstrument<Integer>( + tmp.addCounter("bytesPerTuple", new OneShotInstrument<Long>( bytesPerTuple)); } @@ -1606,24 +1606,11 @@ abstract public int getHeight(); - abstract public int getNodeCount(); + abstract public long getNodeCount(); - abstract public int getLeafCount(); + abstract public long getLeafCount(); - /** - * {@inheritDoc} - * - * @todo this could be re-defined as the exact entry count if we tracked the - * #of deleted index entries and subtracted that from the total #of - * index entries before returning the result. The #of deleted index - * entries would be stored in the index {@link Checkpoint} record. - * <p> - * Since {@link #getEntryCount()} is also used to give the total #of - * index enties (and we need that feature) so we need to either add - * another method with the appropriate semantics, add a boolean flag, - * or add a method returning the #of deleted entries, etc. 
- */ - abstract public int getEntryCount(); + abstract public long getEntryCount(); /** * Return a statistics snapshot of the B+Tree. @@ -2343,7 +2330,7 @@ } - public int indexOf(final byte[] key) { + public long indexOf(final byte[] key) { if (key == null) throw new IllegalArgumentException(); @@ -2357,7 +2344,7 @@ } - public byte[] keyAt(final int index) { + public byte[] keyAt(final long index) { if (index < 0) throw new IndexOutOfBoundsException(ERROR_LESS_THAN_ZERO); @@ -2371,7 +2358,7 @@ } - public byte[] valueAt(final int index) { + public byte[] valueAt(final long index) { final Tuple tuple = getLookupTuple(); @@ -2381,7 +2368,7 @@ } - final public Tuple valueAt(final int index, final Tuple tuple) { + final public Tuple valueAt(final long index, final Tuple tuple) { if (index < 0) throw new IndexOutOfBoundsException(ERROR_LESS_THAN_ZERO); @@ -2516,9 +2503,9 @@ if (toKey != null) assert rangeCheck(toKey,true); - int fromIndex = (fromKey == null ? 0 : root.indexOf(fromKey)); + long fromIndex = (fromKey == null ? 0 : root.indexOf(fromKey)); - int toIndex = (toKey == null ? getEntryCount() : root.indexOf(toKey)); + long toIndex = (toKey == null ? getEntryCount() : root.indexOf(toKey)); // Handle case when fromKey is not found. 
if (fromIndex < 0) @@ -3183,22 +3170,22 @@ if (info) { + final int branchingFactor = getBranchingFactor(); + final int height = getHeight(); - final int nnodes = getNodeCount(); + final long nnodes = getNodeCount(); - final int nleaves = getLeafCount(); + final long nleaves = getLeafCount(); - final int nentries = getEntryCount(); + final long nentries = getEntryCount(); - final int branchingFactor = getBranchingFactor(); - - out.println("height=" + height + ", branchingFactor=" - + branchingFactor + ", #nodes=" + nnodes + ", #leaves=" - + nleaves + ", #entries=" + nentries + ", nodeUtil=" - + utils.getNodeUtilization() + "%, leafUtil=" - + utils.getLeafUtilization() + "%, utilization=" - + utils.getTotalUtilization() + "%"); + out.println("branchingFactor=" + branchingFactor + ", height=" + + height + ", #nodes=" + nnodes + ", #leaves=" + nleaves + + ", #entries=" + nentries + ", nodeUtil=" + + utils.getNodeUtilization() + "%, leafUtil=" + + utils.getLeafUtilization() + "%, utilization=" + + utils.getTotalUtilization() + "%"); } if (root != null) { Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractNode.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractNode.java 2011-05-18 17:58:13 UTC (rev 4522) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractNode.java 2011-05-18 18:14:45 UTC (rev 4523) @@ -57,7 +57,7 @@ * DO-NOT-USE-GENERIC-HERE. The compiler will fail under Linux (JDK 1.6.0_14, * _16). */ -> extends PO implements IAbstractNode, IAbstractNodeData, IKeysData, ISpannedTupleCountData { +> extends PO implements IAbstractNode, IAbstractNodeData, IKeysData { /** * Log for node and leaf operations. @@ -1214,7 +1214,7 @@ * this guarantees that the return value will be >= 0 if and only if * the key is found. 
*/ - abstract public int indexOf(byte[] searchKey); + abstract public long indexOf(byte[] searchKey); /** * Recursive search locates the entry at the specified index position in the @@ -1231,7 +1231,7 @@ * @exception IndexOutOfBoundsException * if index is greater than the #of entries. */ - abstract public byte[] keyAt(int index); + abstract public byte[] keyAt(long index); /** * Recursive search locates the entry at the specified index position in the @@ -1249,7 +1249,7 @@ * @exception IndexOutOfBoundsException * if index is greater than the #of entries. */ - abstract public void valueAt(int index, Tuple tuple); + abstract public void valueAt(long index, Tuple tuple); /** * Dump the data onto the {@link PrintStream} (non-recursive). Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java 2011-05-18 17:58:13 UTC (rev 4522) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java 2011-05-18 18:14:45 UTC (rev 4523) @@ -168,19 +168,19 @@ } - final public int getNodeCount() { + final public long getNodeCount() { return nnodes; } - final public int getLeafCount() { + final public long getLeafCount() { return nleaves; } - final public int getEntryCount() { + final public long getEntryCount() { return nentries; @@ -252,18 +252,27 @@ /** * The #of non-leaf nodes in the btree. The is zero (0) for a new btree. */ - protected int nnodes; + protected long nnodes; /** * The #of leaf nodes in the btree. This is one (1) for a new btree. */ - protected int nleaves; + protected long nleaves; /** * The #of entries in the btree. This is zero (0) for a new btree. */ - protected int nentries; + protected long nentries; + /** + * The value of the record version number that will be assigned to the next + * node or leaf written onto the backing store. 
This number is incremented + * each time a node or leaf is written onto the backing store. The initial + * value is ZERO (0). The first value assigned to a node or leaf will be + * ZERO (0). + */ + protected long recordVersion; + /** * The mutable counter exposed by #getCounter()}. * <p> @@ -417,6 +426,8 @@ this.counter = new AtomicLong( checkpoint.getCounter() ); + this.recordVersion = checkpoint.getRecordVersion(); + } /** Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTreeStatistics.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTreeStatistics.java 2011-05-18 17:58:13 UTC (rev 4522) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTreeStatistics.java 2011-05-18 18:14:45 UTC (rev 4523) @@ -44,22 +44,22 @@ private final int m; - private final int entryCount; - private final int height; - private final int leafCount; + private final long nodeCount; - private final int nodeCount; + private final long leafCount; + private final long entryCount; + private final IBTreeUtilizationReport utilReport; public BTreeStatistics(final AbstractBTree btree) { this.m = btree.getBranchingFactor(); - this.entryCount = btree.getEntryCount(); this.height = btree.getHeight(); + this.nodeCount = btree.getNodeCount(); this.leafCount = btree.getLeafCount(); - this.nodeCount = btree.getNodeCount(); + this.entryCount = btree.getEntryCount(); this.utilReport = btree.getUtilization(); } @@ -67,20 +67,20 @@ return m; } - public int getEntryCount() { - return entryCount; - } - public int getHeight() { return height; } - public int getLeafCount() { + public long getNodeCount() { + return nodeCount; + } + + public long getLeafCount() { return leafCount; } - public int getNodeCount() { - return nodeCount; + public long getEntryCount() { + return entryCount; } public IBTreeUtilizationReport getUtilization() { Modified: 
branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTreeUtilizationReport.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTreeUtilizationReport.java 2011-05-18 17:58:13 UTC (rev 4522) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BTreeUtilizationReport.java 2011-05-18 18:14:45 UTC (rev 4523) @@ -51,13 +51,13 @@ public BTreeUtilizationReport(final IBTreeStatistics stats) { - final int nnodes = stats.getNodeCount(); + final long nnodes = stats.getNodeCount(); - final int nleaves = stats.getLeafCount(); + final long nleaves = stats.getLeafCount(); - final int nentries = stats.getEntryCount(); + final long nentries = stats.getEntryCount(); - final int numNonRootNodes = nnodes + nleaves - 1; + final long numNonRootNodes = nnodes + nleaves - 1; final int branchingFactor = stats.getBranchingFactor(); Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BloomFilter.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BloomFilter.java 2011-05-18 17:58:13 UTC (rev 4522) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/BloomFilter.java 2011-05-18 18:14:45 UTC (rev 4523) @@ -55,11 +55,7 @@ */ public class BloomFilter implements IBloomFilter, Externalizable { - protected static final transient Logger log = Logger.getLogger(BloomFilter.class); - -// protected static final transient boolean INFO = log.isInfoEnabled(); -// -// protected static final transient boolean DEBUG = log.isDebugEnabled(); + private static final transient Logger log = Logger.getLogger(BloomFilter.class); /** * Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/Checkpoint.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/Checkpoint.java 2011-05-18 
17:58:13 UTC (rev 4522) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/Checkpoint.java 2011-05-18 18:14:45 UTC (rev 4523) @@ -36,13 +36,15 @@ private long addrMetadata; private long addrRoot; // of root node/leaf for BTree; rootDir for HTree. private int height; // height for BTree; globalDepth for HTree. - private int nnodes; // #of directories for HTree - private int nleaves; // #of buckets for HTree. - private int nentries; // #of tuples in the index. - private long counter; + private long nnodes; // #of directories for HTree + private long nleaves; // #of buckets for HTree. + private long nentries; // #of tuples in the index. + private long counter; // B+Tree local counter. private long addrBloomFilter; + private long recordVersion; // #of node or leaf records written to date. + /** * Added in {@link #VERSION1}. This is a short field allowing for 65536 * different possible index types. @@ -181,7 +183,7 @@ /** * The #of non-leaf nodes (B+Tree) or directories (HTree). */ - public final int getNodeCount() { + public final long getNodeCount() { return nnodes; @@ -190,7 +192,7 @@ /** * The #of leaves (B+Tree) or hash buckets (HTree). */ - public final int getLeafCount() { + public final long getLeafCount() { return nleaves; @@ -199,22 +201,34 @@ /** * The #of index entries (aka tuple count). */ - public final int getEntryCount() { + public final long getEntryCount() { return nentries; } - /** - * Return the value of the counter stored in the {@link Checkpoint} - * record. - */ + /** + * Return the value of the B+Tree local counter stored in the + * {@link Checkpoint} record. + */ public final long getCounter() { return counter; } + /** + * Return the value of the next record version number to be assigned that is + * stored in the {@link Checkpoint} record. This number is incremented each + * time a node or leaf is written onto the backing store. The initial value + * is ZERO (0). The first value assigned to a node or leaf will be ZERO (0). 
+ */ + public final long getRecordVersion() { + + return recordVersion; + + } + /** * A human readable representation of the state of the {@link Checkpoint} * record. @@ -262,10 +276,11 @@ 0L,// No root yet. 0L,// No bloom filter yet. 0, // height - 0, // nnodes - 0, // nleaves - 0, // nentries + 0L, // nnodes + 0L, // nleaves + 0L, // nentries 0L, // counter + 0L, // recordVersion IndexTypeEnum.BTree // indexType ); @@ -292,10 +307,11 @@ 0L,// No root yet. 0L,// No bloom filter yet. 0, // height - 0, // nnodes - 0, // nleaves - 0, // nentries + 0L, // nnodes + 0L, // nleaves + 0L, // nentries oldCheckpoint.counter,// + 0L, // recordVersion IndexTypeEnum.BTree// ); @@ -350,14 +366,16 @@ btree.nleaves,// btree.nentries,// btree.counter.get(),// + btree.recordVersion,// IndexTypeEnum.BTree// ); } private Checkpoint(final long addrMetadata, final long addrRoot, - final long addrBloomFilter, final int height, final int nnodes, - final int nleaves, final int nentries, final long counter, + final long addrBloomFilter, final int height, final long nnodes, + final long nleaves, final long nentries, final long counter, + final long recordVersion, final IndexTypeEnum indexType) { assert indexType != null; @@ -389,7 +407,9 @@ this.nentries = nentries; this.counter = counter; - + + this.recordVersion = recordVersion; + this.indexType = indexType; } @@ -412,11 +432,36 @@ * which is present only for {@link IndexTypeEnum#HTree}. */ private static transient final int VERSION1 = 0x1; + + /** + * Adds and/or modifies the following fields. + * <dl> + * <dt>nodeCount</dt> + * <dd>Changed from int32 to int64.</dd> + * <dt>leafCount</dt> + * <dd>Changed from int32 to int64.</dd> + * <dt>entryCount</dt> + * <dd>Changed from int32 to int64.</dd> + * <dt>recordVersion</dt> + * <dd>Added a new field which records the <em>next</em> record version + * identifier to be used when the next node or leaf data record is written + * onto the backing store. 
The field provides sequential numbering of those + * data records which can facilitate certain kinds of forensics. For example, + * all updated records falling between two checkpoints may be identified by + * a file scan filtering for the index UUID and a record version number GT + * the last record version written for the first checkpoint and LTE the last + * record version number written for the second checkpoint. The initial + * value is ZERO (0).</dd> + * </dl> + * In addition, the <strong>size</strong> of the checkpoint record has been + * increased in order to provide room for future expansions. + */ + private static transient final int VERSION2 = 0x2; /** * The current version. */ - private static transient final int VERSION = VERSION1; + private static transient final int currentVersion = VERSION2; /** * Write the {@link Checkpoint} record on the store, setting @@ -474,25 +519,42 @@ switch (version) { case VERSION0: case VERSION1: + case VERSION2: break; default: throw new IOException("Unknown version: " + version); } - this.addrMetadata = in.readLong(); + this.addrMetadata = in.readLong(); - this.addrRoot = in.readLong(); + this.addrRoot = in.readLong(); - this.addrBloomFilter = in.readLong(); - - this.height = in.readInt(); + this.addrBloomFilter = in.readLong(); - this.nnodes = in.readInt(); + this.height = in.readInt(); - this.nleaves = in.readInt(); + if (version <= VERSION1) { - this.nentries = in.readInt(); + this.nnodes = in.readInt(); + this.nleaves = in.readInt(); + + this.nentries = in.readInt(); + + this.recordVersion = 0L; + + } else { + + this.nnodes = in.readLong(); + + this.nleaves = in.readLong(); + + this.nentries = in.readLong(); + + this.recordVersion = in.readLong(); + + } + this.counter = in.readLong(); switch (version) { @@ -501,6 +563,7 @@ indexType = IndexTypeEnum.BTree; break; case VERSION1: + case VERSION2: this.indexType = IndexTypeEnum.valueOf(in.readShort()); in.readShort();// ignored. in.readInt();// ignored.
@@ -510,12 +573,23 @@ } in.readLong(); // unused. + + if (version >= VERSION2) { + + // Read some additional padding added to the record in VERSION2. + for (int i = 0; i < 10; i++) { + + in.readLong(); + + } + + } } public void writeExternal(final ObjectOutput out) throws IOException { - out.writeInt(VERSION); + out.writeInt(currentVersion); out.writeLong(addrMetadata); @@ -525,12 +599,33 @@ out.writeInt(height); - out.writeInt(nnodes); + if (currentVersion <= VERSION1) { - out.writeInt(nleaves); + if (nnodes > Integer.MAX_VALUE) + throw new RuntimeException(); + if (nleaves > Integer.MAX_VALUE) + throw new RuntimeException(); + if (nentries > Integer.MAX_VALUE) + throw new RuntimeException(); + + out.writeInt((int)nnodes); - out.writeInt(nentries); + out.writeInt((int)nleaves); + out.writeInt((int)nentries); + + } else { + + out.writeLong(nnodes); + + out.writeLong(nleaves); + + out.writeLong(nentries); + + out.writeLong(recordVersion); + + } + out.writeLong(counter); /* @@ -541,12 +636,21 @@ out.writeShort(0/* unused */); out.writeInt(0/* unused */); - /* - * 8 bytes follow. - */ + /* + * 8 bytes follow. + */ out.writeLong(0L/* unused */); + /* + * Additional space added in VERSION2. + */ + for (int i = 0; i < 10; i++) { + + out.writeLong(0L/* unused */); + + } + } } Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IBTreeStatistics.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IBTreeStatistics.java 2011-05-18 17:58:13 UTC (rev 4522) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IBTreeStatistics.java 2011-05-18 18:14:45 UTC (rev 4523) @@ -59,20 +59,20 @@ * The #of non-leaf nodes in the {@link AbstractBTree}. This is zero (0) * for a new btree. */ - int getNodeCount(); + long getNodeCount(); /** * The #of leaf nodes in the {@link AbstractBTree}. This is one (1) for a * new btree. 
*/ - int getLeafCount(); + long getLeafCount(); /** * The #of entries (aka tuples) in the {@link AbstractBTree}. This is zero * (0) for a new B+Tree. When the B+Tree supports delete markers, this value * also includes tuples which have been marked as deleted. */ - int getEntryCount(); + long getEntryCount(); /** * Computes and returns the utilization of the tree. The utilization figures Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/ILinearList.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/ILinearList.java 2011-05-18 17:58:13 UTC (rev 4522) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/ILinearList.java 2011-05-18 18:14:45 UTC (rev 4523) @@ -29,7 +29,18 @@ /** * Interface for methods that return or accept an ordinal index into the entries - * in the B+-TRee. + * in the B+-Tree. The semantics of this interface are built over the #of + * spanned tuples for each child as recorded within each node of the B+Tree. + * This provides a fast means to compute the linear index into the B+Tree of any + * given tuple. However, this interface is only available for a local B+Tree + * object (versus scale-out) since the spanned tuple count metadata is not exact + * across shards. Further, when delete markers are used, the deleted tuples + * remain in the B+Tree and the {@link ILinearList} interface will continue to + * count them until they have been purged. Thus deleting a tuple does not change + * the {@link #indexOf(byte[])} keys after that tuple, {@link #keyAt(long)} can + * return the key for a deleted tuple, and {@link #valueAt(long)} will return + * <code>null</code> if the tuple at that index is marked as deleted within the + * B+Tree. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ @@ -59,10 +70,10 @@ * in the index.
* * - * @see #keyAt(int) - * @see #valueAt(int) + * @see #keyAt(long) + * @see #valueAt(long) */ - public int indexOf(byte[] key); + public long indexOf(byte[] key); /** * Return the key for the identified entry. This performs an efficient @@ -72,17 +83,17 @@ * @param index * The index position of the entry (origin zero). * - * @return The key at that index position (not a copy). + * @return The key at that index position. * * @exception IndexOutOfBoundsException * if index is less than zero. * @exception IndexOutOfBoundsException * if index is greater than or equal to the #of entries. * - * @see #indexOf(Object) - * @see #valueAt(int) + * @see #indexOf(byte[]) + * @see #valueAt(long) */ - public byte[] keyAt(int index); + public byte[] keyAt(long index); /** * Return the value for the identified entry. This performs an efficient @@ -100,9 +111,9 @@ * @exception IndexOutOfBoundsException * if index is greater than or equal to the #of entries. * - * @see #indexOf(Object) - * @see #keyAt(int) + * @see #indexOf(byte[]) + * @see #keyAt(long) */ - public byte[] valueAt(int index); + public byte[] valueAt(long index); } Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegment.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegment.java 2011-05-18 17:58:13 UTC (rev 4522) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegment.java 2011-05-18 18:14:45 UTC (rev 4523) @@ -129,23 +129,23 @@ } - final public int getLeafCount() { + final public long getNodeCount() { reopen(); - return fileStore.getCheckpoint().nleaves; + return fileStore.getCheckpoint().nnodes; } - final public int getNodeCount() { + final public long getLeafCount() { reopen(); - return fileStore.getCheckpoint().nnodes; + return fileStore.getCheckpoint().nleaves; } - final public int getEntryCount() { + final public long getEntryCount() { reopen(); @@ -718,10 
+718,7 @@ /** * @param btree * @param addr - * @param branchingFactor - * @param nentries - * @param keys - * @param childKeys + * @param data */ protected ImmutableNode(final AbstractBTree btree, final long addr, final INodeData data) { @@ -843,9 +840,9 @@ return Long.MAX_VALUE; } - final public int getSpannedTupleCount() { - return 0; - } +// final public int getSpannedTupleCount() { +// return 0; +// } final public boolean isLeaf() { return true; Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentBuilder.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentBuilder.java 2011-05-18 17:58:13 UTC (rev 4522) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentBuilder.java 2011-05-18 18:14:45 UTC (rev 4523) @@ -48,7 +48,6 @@ import com.bigdata.btree.data.IAbstractNodeData; import com.bigdata.btree.data.ILeafData; import com.bigdata.btree.data.INodeData; -import com.bigdata.btree.data.ISpannedTupleCountData; import com.bigdata.btree.raba.IRaba; import com.bigdata.btree.raba.MutableKeyBuffer; import com.bigdata.btree.raba.MutableValueBuffer; @@ -261,7 +260,7 @@ /** * The value specified to the ctor. */ - final public int entryCount; + final public long entryCount; /** * The iterator specified to the ctor. This is the source for the keys and @@ -557,7 +556,7 @@ /** * The #of tuples written for the output tree. */ - int ntuplesWritten; + long ntuplesWritten; /** * The #of nodes written for the output tree. 
This will be zero if all @@ -1106,7 +1105,7 @@ public static IndexSegmentBuilder newInstance(// final File outFile,// final File tmpDir,// - final int entryCount,// + final long entryCount,// final ITupleIterator<?> entryIterator, // final int m,// final IndexMetadata metadata,// @@ -1196,7 +1195,7 @@ protected IndexSegmentBuilder(// final File outFile,// final File tmpDir,// - final int entryCount,// + final long entryCount,// final ITupleIterator<?> entryIterator, // final int m,// IndexMetadata metadata,// @@ -1415,21 +1414,26 @@ stack[plan.height] = leaf; - /* - * Setup optional bloom filter. - * - * Note: For read-only {@link IndexSegment} we always know the #of - * keys exactly at the time that we provision the bloom filter. This - * makes it easy for us to tune the filter for a desired false - * positive rate. - */ - if (metadata.getBloomFilterFactory() != null && plan.nentries > 0) { + /* + * Setup optional bloom filter. + * + * Note: For read-only {@link IndexSegment} we always know the #of + * keys exactly at the time that we provision the bloom filter. This + * makes it easy for us to tune the filter for a desired false + * positive rate. + * + * Note: The bloom filter can not be used with very large indices + * due to the space requirements of the filter. However, very large + * in this case is MAX_INT tuples! + */ + if (metadata.getBloomFilterFactory() != null && plan.nentries > 0 + && plan.nentries < Integer.MAX_VALUE) { // the desired error rate for the bloom filter. final double p = metadata.getBloomFilterFactory().p; - // create the bloom filter. - bloomFilter = new BloomFilter(plan.nentries, p); + // create the bloom filter. + bloomFilter = new BloomFilter((int) plan.nentries, p); } else { @@ -2065,7 +2069,8 @@ final AbstractSimpleNodeData child) { // #of entries spanned by this node. - final int nentries = child.getSpannedTupleCount(); + final long nentries = (child.isLeaf() ? 
child.getKeyCount() + : ((INodeData) child).getSpannedTupleCount()); if (parent.nchildren == parent.max) { @@ -3238,11 +3243,21 @@ outChannel.position(0); + /* + * Note: The build plan is restricted to MAX_INT leaves and there + * are always more leaves than nodes in a B+Tree, so both nnodes and + * nleaves are int32 values. + */ + if(nnodesWritten>Integer.MAX_VALUE) + throw new AssertionError(); + if(nleavesWritten>Integer.MAX_VALUE) + throw new AssertionError(); + final IndexSegmentCheckpoint md = new IndexSegmentCheckpoint( addressManager.getOffsetBits(), // plan.height, // will always be correct. - nleavesWritten, // actual #of leaves written. - nnodesWritten, // actual #of nodes written. + (int)nleavesWritten, // actual #of leaves written. + (int)nnodesWritten, // actual #of nodes written. ntuplesWritten, // actual #of tuples written. maxNodeOrLeafLength,// offsetLeaves, extentLeaves, offsetNodes, extentNodes, @@ -3292,7 +3307,7 @@ * Thompson</a> */ abstract protected static class AbstractSimpleNodeData implements - IAbstractNodeData, ISpannedTupleCountData { + IAbstractNodeData { /** * The level in the output tree for this node or leaf (origin zero). The @@ -3478,11 +3493,11 @@ } - final public int getSpannedTupleCount() { - - return keys.size(); - - } +// final public int getSpannedTupleCount() { +// +// return keys.size(); +// +// } final public int getValueCount() { @@ -3632,12 +3647,12 @@ /** * The #of entries spanned by this node. */ - int nentries; + long nentries; /** * The #of entries spanned by each child of this node.
*/ - final int[] childEntryCount; + final long [] childEntryCount; /** * <code>true</code> iff the node is tracking the min/max tuple revision @@ -3645,7 +3660,7 @@ */ final boolean hasVersionTimestamps; - final public int getSpannedTupleCount() { + final public long getSpannedTupleCount() { return nentries; @@ -3660,7 +3675,7 @@ } - final public int getChildEntryCount(final int index) { + final public long getChildEntryCount(final int index) { if (index < 0 || index > keys.size() + 1) throw new IllegalArgumentException(); @@ -3676,7 +3691,7 @@ this.childAddr = new long[m]; - this.childEntryCount = new int[m]; + this.childEntryCount = new long[m]; this.hasVersionTimestamps = hasVersionTimestamps; Modified: branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentCheckpoint.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentCheckpoint.java 2011-05-18 17:58:13 UTC (rev 4522) +++ branches/QUADS_QUERY_BRANCH/bigdata/src/java/com/bigdata/btree/IndexSegmentCheckpoint.java 2011-05-18 18:14:45 UTC (rev 4523) @@ -86,15 +86,20 @@ static final int SIZEOF_TIMESTAMP = Bytes.SIZEOF_LONG; static final int SI... [truncated message content] |
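The VERSION2 changes to Checkpoint above (int32 counts widened to int64, plus the new recordVersion field) follow a standard version-gated serialization pattern: the reader dispatches on a version tag at the head of the record and substitutes a default value for fields that did not exist in older layouts, while the legacy write path refuses to silently truncate a count that no longer fits in an int. A minimal standalone sketch of that pattern follows; the class, constants, and field names are illustrative only, not bigdata APIs.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class VersionedRecordDemo {

    static final int VERSION1 = 0x1; // counts stored as int32; no recordVersion field.
    static final int VERSION2 = 0x2; // counts widened to int64; recordVersion added.

    long nnodes, nleaves, nentries, recordVersion;

    /** Serialize using the requested on-disk version. */
    byte[] write(final int version) throws IOException {
        final ByteArrayOutputStream baos = new ByteArrayOutputStream();
        final DataOutputStream out = new DataOutputStream(baos);
        out.writeInt(version);
        if (version <= VERSION1) {
            // Guard against silent truncation when writing the legacy layout.
            if (nnodes > Integer.MAX_VALUE || nleaves > Integer.MAX_VALUE
                    || nentries > Integer.MAX_VALUE)
                throw new IOException("count exceeds int32 for legacy version");
            out.writeInt((int) nnodes);
            out.writeInt((int) nleaves);
            out.writeInt((int) nentries);
        } else {
            out.writeLong(nnodes);
            out.writeLong(nleaves);
            out.writeLong(nentries);
            out.writeLong(recordVersion);
        }
        out.flush();
        return baos.toByteArray();
    }

    /** Deserialize, dispatching on the version tag at the head of the record. */
    static VersionedRecordDemo read(final byte[] b) throws IOException {
        final DataInputStream in = new DataInputStream(new ByteArrayInputStream(b));
        final VersionedRecordDemo r = new VersionedRecordDemo();
        final int version = in.readInt();
        if (version <= VERSION1) {
            r.nnodes = in.readInt();
            r.nleaves = in.readInt();
            r.nentries = in.readInt();
            r.recordVersion = 0L; // field did not exist before VERSION2.
        } else {
            r.nnodes = in.readLong();
            r.nleaves = in.readLong();
            r.nentries = in.readLong();
            r.recordVersion = in.readLong();
        }
        return r;
    }

    public static void main(final String[] args) throws IOException {
        final VersionedRecordDemo a = new VersionedRecordDemo();
        a.nnodes = 3;
        a.nleaves = 7;
        a.nentries = 5_000_000_000L; // only representable under the widened layout.
        a.recordVersion = 42;
        final VersionedRecordDemo b = read(a.write(VERSION2));
        if (b.nentries != 5_000_000_000L || b.recordVersion != 42)
            throw new AssertionError();
        System.out.println("round-trip ok");
    }
}
```

Note how a reader for the old layout backfills recordVersion with ZERO (0), which is the same choice the patched Checkpoint.readExternal() makes for pre-VERSION2 records.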
From: <tho...@us...> - 2011-05-20 15:05:07
Revision: 4530 http://bigdata.svn.sourceforge.net/bigdata/?rev=4530&view=rev Author: thompsonbry Date: 2011-05-20 15:04:59 +0000 (Fri, 20 May 2011) Log Message: ----------- Changed the default minRelevance value to 0.25 (from 0.0) and updated the unit tests and code to match. Modified Paths: -------------- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/ITextIndexer.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/BD.java branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestFullTextIndex.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/FreeTextSearchExpander.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/NanoSparqlClient.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailEvaluationStrategyImpl.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestNamedGraphs.java branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestSearchQuery.java Modified: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/ITextIndexer.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/ITextIndexer.java 2011-05-20 00:12:12 UTC (rev 4529) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/lexicon/ITextIndexer.java 2011-05-20 15:04:59 UTC (rev 4530) @@ -79,40 +79,42 @@ */ public boolean getIndexDatatypeLiterals(); - /** - * Do free text search - * - * @param query - * The query (it will be parsed into tokens). - * @param languageCode - * The language code that should be used when tokenizing the - * query -or- <code>null</code> to use the default {@link Locale} - * ). 
- * @param prefixMatch - * When <code>true</code>, the matches will be on tokens which - * include the query tokens as a prefix. This includes exact - * matches as a special case when the prefix is the entire token, - * but it also allows longer matches. For example, - * <code>free</code> will be an exact match on <code>free</code> - * but a partial match on <code>freedom</code>. When - * <code>false</code>, only exact matches will be made. - * @param minCosine - * The minimum cosine that will be returned. - * @param maxCosine - * The maximum cosine that will be returned. Useful for - * evaluating in relevance ranges. - * @param maxRank - * The upper bound on the #of hits in the result set. - * @param matchAllTerms - * if true, return only hits that match all search terms - * @param timeout - * The timeout -or- ZERO (0) for NO timeout (this is equivalent - * to using {@link Long#MAX_VALUE}). - * @param unit - * The unit in which the timeout is expressed. - * - * @return The result set. - */ + /** + * Do free text search + * + * @param query + * The query (it will be parsed into tokens). + * @param languageCode + * The language code that should be used when tokenizing the + * query -or- <code>null</code> to use the default {@link Locale} + * ). + * @param prefixMatch + * When <code>true</code>, the matches will be on tokens which + * include the query tokens as a prefix. This includes exact + * matches as a special case when the prefix is the entire token, + * but it also allows longer matches. For example, + * <code>free</code> will be an exact match on <code>free</code> + * but a partial match on <code>freedom</code>. When + * <code>false</code>, only exact matches will be made. + * @param minCosine + * The minimum cosine that will be returned (in [0:maxCosine]). + * If you specify a minimum cosine of ZERO (0.0) you can drag in + * a lot of basically useless search results. + * @param maxCosine + * The maximum cosine that will be returned (in [minCosine:1.0]). 
+ * Useful for evaluating in relevance ranges. + * @param maxRank + * The upper bound on the #of hits in the result set. + * @param matchAllTerms + * if true, return only hits that match all search terms + * @param timeout + * The timeout -or- ZERO (0) for NO timeout (this is equivalent + * to using {@link Long#MAX_VALUE}). + * @param unit + * The unit in which the timeout is expressed. + * + * @return The result set. + */ public Hiterator<A> search(final String query, final String languageCode, final boolean prefixMatch, final double minCosine, final double maxCosine, Modified: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/BD.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/BD.java 2011-05-20 00:12:12 UTC (rev 4529) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/store/BD.java 2011-05-20 15:04:59 UTC (rev 4530) @@ -165,40 +165,63 @@ * } * * </pre> + * + * The default is {@value #DEFAULT_MAX_HITS}. */ final URI MAX_HITS = new URIImpl(SEARCH_NAMESPACE+"maxHits"); - + /** - * Magic predicate used to query for free text search metadata. Use - * in conjunction with {@link #SEARCH} as follows: - * <p> - * <pre> - * - * select ?s - * where { - * ?s bd:search "scale-out RDF triplestore" . - * ?s bd:minRelevance "0.5"^^xsd:double . - * } - * - * </pre> + * The default for {@link #MAX_HITS}. */ + final int DEFAULT_MAX_HITS = Integer.MAX_VALUE; + + /** + * Magic predicate used to query for free text search metadata. Use in + * conjunction with {@link #SEARCH} as follows: + * <p> + * + * <pre> + * + * select ?s + * where { + * ?s bd:search "scale-out RDF triplestore" . + * ?s bd:minRelevance "0.5"^^xsd:double . + * } + * + * </pre> + * + * The relevance scores are in [0.0:1.0]. You should NOT specify a minimum + * relevance of ZERO (0.0) as this can drag in way too many unrelated + * results. 
The default is {@value #DEFAULT_MIN_RELEVANCE}. + */ final URI MIN_RELEVANCE = new URIImpl(SEARCH_NAMESPACE+"minRelevance"); + final double DEFAULT_MIN_RELEVANCE = 0.25d; + + /** + * Magic predicate used to query for free text search metadata. Use in + * conjunction with {@link #SEARCH} as follows: + * <p> + * + * <pre> + * + * select ?s + * where { + * ?s bd:search "scale-out RDF triplestore" . + * ?s bd:maxRelevance "0.9"^^xsd:double . + * } + * + * </pre> + * + * The relevance scores are in [0.0:1.0]. The default maximum relevance is + * {@value #DEFAULT_MAX_RELEVANCE}. + */ + final URI MAX_RELEVANCE = new URIImpl(SEARCH_NAMESPACE+"maxRelevance"); + /** - * Magic predicate used to query for free text search metadata. Use - * in conjunction with {@link #SEARCH} as follows: - * <p> - * <pre> - * - * select ?s - * where { - * ?s bd:search "scale-out RDF triplestore" . - * ?s bd:maxRelevance "0.9"^^xsd:double . - * } - * - * </pre> + * The default value for {@link #MAX_RELEVANCE} unless overridden. */ - final URI MAX_RELEVANCE = new URIImpl(SEARCH_NAMESPACE+"maxRelevance"); + final double DEFAULT_MAX_RELEVANCE = 1.0d; /** * Magic predicate used to query for free text search metadata. Use @@ -216,7 +239,16 @@ */ final URI MATCH_ALL_TERMS = new URIImpl(SEARCH_NAMESPACE+"matchAllTerms"); + final boolean DEFAULT_MATCH_ALL_TERMS = false; + + final boolean DEFAULT_PREFIX_MATCH = false; + /** + * The default timeout for a free text search (milliseconds). + */ + final long DEFAULT_TIMEOUT = Long.MAX_VALUE; + + /** * Sesame has the notion of a "null" graph. Any time you insert a statement * into a quad store and the context position is not specified, it is * actually inserted into this "null" graph. 
If SPARQL <code>DATASET</code> Modified: branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestFullTextIndex.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestFullTextIndex.java 2011-05-20 00:12:12 UTC (rev 4529) +++ branches/QUADS_QUERY_BRANCH/bigdata-rdf/src/test/com/bigdata/rdf/lexicon/TestFullTextIndex.java 2011-05-20 15:04:59 UTC (rev 4530) @@ -136,8 +136,9 @@ minCosine, 1.0d/* maxCosine */, Integer.MAX_VALUE/* maxRank */, false/* matchAllTerms */, - 2L/* timeout */, - TimeUnit.SECONDS); + Long.MAX_VALUE,//2L/* timeout */, + TimeUnit.MILLISECONDS// TimeUnit.SECONDS + ); // assertEquals("#hits", (long) expected.length, itr.size()); Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java 2011-05-20 00:12:12 UTC (rev 4529) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataEvaluationStrategyImpl3.java 2011-05-20 15:04:59 UTC (rev 4530) @@ -30,7 +30,6 @@ import org.openrdf.query.algebra.BNodeGenerator; import org.openrdf.query.algebra.Bound; import org.openrdf.query.algebra.Compare; -import org.openrdf.query.algebra.Compare.CompareOp; import org.openrdf.query.algebra.CompareAll; import org.openrdf.query.algebra.CompareAny; import org.openrdf.query.algebra.Datatype; @@ -64,7 +63,6 @@ import org.openrdf.query.algebra.Regex; import org.openrdf.query.algebra.SameTerm; import org.openrdf.query.algebra.StatementPattern; -import org.openrdf.query.algebra.StatementPattern.Scope; import org.openrdf.query.algebra.Str; import org.openrdf.query.algebra.TupleExpr; import org.openrdf.query.algebra.UnaryTupleOperator; @@ -72,6 +70,8 @@ import org.openrdf.query.algebra.ValueConstant; 
import org.openrdf.query.algebra.ValueExpr; import org.openrdf.query.algebra.Var; +import org.openrdf.query.algebra.Compare.CompareOp; +import org.openrdf.query.algebra.StatementPattern.Scope; import org.openrdf.query.algebra.evaluation.impl.EvaluationStrategyImpl; import org.openrdf.query.algebra.evaluation.iterator.FilterIterator; import org.openrdf.query.algebra.helpers.QueryModelVisitorBase; @@ -83,12 +83,12 @@ import com.bigdata.bop.IConstant; import com.bigdata.bop.IConstraint; import com.bigdata.bop.IPredicate; -import com.bigdata.bop.IPredicate.Annotations; import com.bigdata.bop.IValueExpression; import com.bigdata.bop.IVariable; import com.bigdata.bop.IVariableOrConstant; import com.bigdata.bop.NV; import com.bigdata.bop.PipelineOp; +import com.bigdata.bop.IPredicate.Annotations; import com.bigdata.bop.ap.Predicate; import com.bigdata.bop.bindingSet.ListBindingSet; import com.bigdata.bop.constraint.INBinarySearch; @@ -109,7 +109,6 @@ import com.bigdata.rdf.internal.constraints.IsLiteralBOp; import com.bigdata.rdf.internal.constraints.IsURIBOp; import com.bigdata.rdf.internal.constraints.MathBOp; -import com.bigdata.rdf.internal.constraints.MathBOp.MathOp; import com.bigdata.rdf.internal.constraints.NotBOp; import com.bigdata.rdf.internal.constraints.OrBOp; import com.bigdata.rdf.internal.constraints.RangeBOp; @@ -117,15 +116,16 @@ import com.bigdata.rdf.internal.constraints.SPARQLConstraint; import com.bigdata.rdf.internal.constraints.SameTermBOp; import com.bigdata.rdf.internal.constraints.StrBOp; +import com.bigdata.rdf.internal.constraints.MathBOp.MathOp; import com.bigdata.rdf.lexicon.LexiconRelation; import com.bigdata.rdf.model.BigdataValue; import com.bigdata.rdf.sail.BigdataSail.Options; import com.bigdata.rdf.sail.sop.SOp; import com.bigdata.rdf.sail.sop.SOp2BOpUtility; import com.bigdata.rdf.sail.sop.SOpTree; -import com.bigdata.rdf.sail.sop.SOpTree.SOpGroup; import com.bigdata.rdf.sail.sop.SOpTreeBuilder; import 
com.bigdata.rdf.sail.sop.UnsupportedOperatorException; +import com.bigdata.rdf.sail.sop.SOpTree.SOpGroup; import com.bigdata.rdf.spo.DefaultGraphSolutionExpander; import com.bigdata.rdf.spo.ExplicitSPOFilter; import com.bigdata.rdf.spo.ISPO; @@ -2181,12 +2181,12 @@ final Iterator<IHit> itr = (Iterator)database.getLexiconRelation() .getSearchEngine().search(label, languageCode, - false/* prefixMatch */, - 0d/* minCosine */, - 1.0d/* maxCosine */, - Integer.MAX_VALUE/* maxRank */, - false/* matchAllTerms */, - 0L/* timeout */, + BD.DEFAULT_PREFIX_MATCH,//false/* prefixMatch */, + BD.DEFAULT_MIN_RELEVANCE,//0d/* minCosine */, + BD.DEFAULT_MAX_RELEVANCE,//1.0d/* maxCosine */, + BD.DEFAULT_MAX_HITS,//Integer.MAX_VALUE/* maxRank */, + BD.DEFAULT_MATCH_ALL_TERMS,//false/* matchAllTerms */, + BD.DEFAULT_TIMEOUT,//0/* timeout */, TimeUnit.MILLISECONDS); // ensure that named graphs are handled correctly for quads Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/FreeTextSearchExpander.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/FreeTextSearchExpander.java 2011-05-20 00:12:12 UTC (rev 4529) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/FreeTextSearchExpander.java 2011-05-20 15:04:59 UTC (rev 4530) @@ -169,23 +169,15 @@ prefixMatch = false; } - /* - * FIXME This is using a constant (1000ms) for the timeout on - * the free text search. That needs to be passed down from the - * SAIL. - * - * @todo Rather than explicitly passing in all of these as - * parameters to the constructor, why not pass them through as - * annotations on the magic predicate? - */ hiterator = textNdx.search(s, query.getLanguage(), prefixMatch, - minRelevance == null ? 0d : minRelevance.doubleValue()/* minCosine */, - maxRelevance == null ? 1.0d : maxRelevance.doubleValue()/* maxCosine */, - maxHits == null ? 
Integer.MAX_VALUE : maxHits.intValue()+1/* maxRank */, + minRelevance == null ? BD.DEFAULT_MIN_RELEVANCE : minRelevance.doubleValue()/* minCosine */, + maxRelevance == null ? BD.DEFAULT_MAX_RELEVANCE : maxRelevance.doubleValue()/* maxCosine */, + maxHits == null ? BD.DEFAULT_MAX_HITS/*Integer.MAX_VALUE*/ : maxHits.intValue()+1/* maxRank */, matchAllTerms, - 0L/* timeout */, TimeUnit.MILLISECONDS); + BD.DEFAULT_TIMEOUT/*0L*//* timeout */, + TimeUnit.MILLISECONDS); } return hiterator; Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/NanoSparqlClient.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/NanoSparqlClient.java 2011-05-20 00:12:12 UTC (rev 4529) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/bench/NanoSparqlClient.java 2011-05-20 15:04:59 UTC (rev 4530) @@ -866,7 +866,7 @@ public final String queryStr; /** Metadata about each query presentation. */ - public LinkedBlockingQueue<QueryTrial> trials = new LinkedBlockingQueue<QueryTrial>(/* unbounded */); + public final LinkedBlockingQueue<QueryTrial> trials = new LinkedBlockingQueue<QueryTrial>(/* unbounded */); /** * Total elapsed nanoseconds over all {@link QueryTrial}s for this @@ -939,7 +939,7 @@ /** * Order by increasing elapsed time (slowest queries are last). 
*/ - public int compareTo(Score o) { + public int compareTo(final Score o) { if (elapsedNanos < o.elapsedNanos) return -1; if (elapsedNanos > o.elapsedNanos) Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailEvaluationStrategyImpl.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailEvaluationStrategyImpl.java 2011-05-20 00:12:12 UTC (rev 4529) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailEvaluationStrategyImpl.java 2011-05-20 15:04:59 UTC (rev 4530) @@ -706,12 +706,14 @@ System.err.println("\"Mike\" = " + sail.getDatabase().getIV(new LiteralImpl("Mike"))); System.err.println("\"Jane\" = " + sail.getDatabase().getIV(new LiteralImpl("Jane"))); - String query = + final double minRelevance = 0d; + final String query = "select ?s ?label " + "where { " + " ?s <"+RDF.TYPE+"> <"+person+"> . " + // [160, 8, 164], [156, 8, 164] " ?s <"+RDFS.LABEL+"> ?label . " + // [160, 148, 174], [156, 148, 170] " ?label <"+search+"> \"Mi*\" . " + // [174, 0, 0] + " ?label <"+BD.MIN_RELEVANCE+"> \""+minRelevance+"\" . " + "}"; { // evalute it once so i can see it Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestNamedGraphs.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestNamedGraphs.java 2011-05-20 00:12:12 UTC (rev 4529) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestNamedGraphs.java 2011-05-20 15:04:59 UTC (rev 4530) @@ -1593,6 +1593,7 @@ public void testSearchQuery() throws Exception { + final double minRelevance = 0d; final BigdataSail sail = getSail(); try { sail.initialize(); @@ -1638,6 +1639,7 @@ "where { " + " ?y <"+ BD.SEARCH+"> \"Chris*\" . " + " ?x <"+ RDFS.LABEL.stringValue() + "> ?y . 
" + + " ?y <"+BD.MIN_RELEVANCE+"> \""+minRelevance+"\" . " + "}"; final TupleQuery tupleQuery = cxn.prepareTupleQuery( @@ -1671,6 +1673,7 @@ " graph <http://example.org> { " + " ?y <"+ BD.SEARCH+"> \"Chris*\" . " + " ?x <"+ RDFS.LABEL.stringValue() + "> ?y ." + + " ?y <"+BD.MIN_RELEVANCE+"> \""+minRelevance+"\" . " + " } . " + "}"; Modified: branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestSearchQuery.java =================================================================== --- branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestSearchQuery.java 2011-05-20 00:12:12 UTC (rev 4529) +++ branches/QUADS_QUERY_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestSearchQuery.java 2011-05-20 15:04:59 UTC (rev 4530) @@ -300,7 +300,9 @@ final Set<Value> expected = new HashSet<Value>(); - expected.add(new LiteralImpl("Yellow Rose")); + // Note: Whether or not this solution is present depends on the + // default value for minCosine. +// expected.add(new LiteralImpl("Yellow Rose")); expected.add(new LiteralImpl("Old Yellow House")); @@ -316,11 +318,13 @@ final BindingSet solution = itr.next(); - System.out.println("solution[" + i + "] : " + solution); + if (log.isInfoEnabled()) + log.info("solution[" + i + "] : " + solution); final Value actual = solution.getValue("X"); - System.out.println("X[" + i + "] = " + actual + " (" + if (log.isInfoEnabled()) + log.info("X[" + i + "] = " + actual + " (" + actual.getClass().getName() + ")"); assertTrue("Not expecting X=" + actual, expected @@ -364,6 +368,15 @@ final URI ENTITY = new URIImpl("http://bigdata.com/system#Entity"); + final String query = "construct {"+// + "?s <" + RDF.TYPE + "> <" + ENTITY + "> ."+// + " } " + "where { "+// + " ?s <" + RDF.TYPE + "> <" + ENTITY + "> ."+// + " ?s ?p ?lit ."+// + " ?lit <" + BD.SEARCH + "> \"systap\" ."+// + " ?lit <" + BD.MIN_RELEVANCE + "> \"0.0\"^^<http://www.w3.org/2001/XMLSchema#double> ."+// + " }"; + // the ontology (nothing is indexed 
for full text search). final Graph test_restart_1 = new GraphImpl(); { @@ -446,10 +459,6 @@ } { // run the query (free text search) - final String query = "construct { ?s <" + RDF.TYPE + "> <" - + ENTITY + "> . } " + "where { ?s <" + RDF.TYPE - + "> <" + ENTITY + "> . ?s ?p ?lit . ?lit <" - + BD.SEARCH + "> \"systap\" . }"; final RepositoryConnection cxn = repo.getConnection(); try { // silly construct queries, can't guarantee distinct @@ -459,13 +468,9 @@ QueryLanguage.SPARQL, query); graphQuery.evaluate(new StatementCollector(results)); for (Statement stmt : results) { - log.info(stmt); + if(log.isInfoEnabled()) + log.info(stmt); } - /* - * @todo this test is failing : review with MikeP and - * figure out if it is the test or the system under - * test. - */ assertTrue(results.contains(new StatementImpl(SYSTAP, RDF.TYPE, ENTITY))); } finally { @@ -489,10 +494,10 @@ repo.initialize(); { // run the query again - final String query = "construct { ?s <" + RDF.TYPE + "> <" - + ENTITY + "> . } " + "where { ?s <" + RDF.TYPE - + "> <" + ENTITY + "> . ?s ?p ?lit . ?lit <" - + BD.SEARCH + "> \"systap\" . }"; +// final String query = "construct { ?s <" + RDF.TYPE + "> <" +// + ENTITY + "> . } " + "where { ?s <" + RDF.TYPE +// + "> <" + ENTITY + "> . ?s ?p ?lit . ?lit <" +// + BD.SEARCH + "> \"systap\" . 
}"; final RepositoryConnection cxn = repo.getConnection(); try { // silly construct queries, can't guarantee distinct @@ -502,7 +507,8 @@ QueryLanguage.SPARQL, query); graphQuery.evaluate(new StatementCollector(results)); for (Statement stmt : results) { - log.info(stmt); + if(log.isInfoEnabled()) + log.info(stmt); } assertTrue("Lost commit?", results .contains(new StatementImpl(SYSTAP, RDF.TYPE, @@ -782,9 +788,10 @@ int i = 0; while (result.hasNext()) { - System.err.println(i++ + ": " + result.next().toString()); + if(log.isInfoEnabled()) + log.info(i++ + ": " + result.next().toString()); } - assertTrue("wrong # of results", i == 7); + assertEquals("wrong # of results", 7, i); result = tupleQuery.evaluate(); @@ -795,12 +802,12 @@ final Hiterator<IHit> hits = search.search(searchQuery, null, // languageCode - false, // prefixMatch - 0d, // minCosine - 1.0d, // maxCosine - 10000, // maxRank (=maxResults + 1) - false, // matchAllTerms - 1000L, // timeout + BD.DEFAULT_PREFIX_MATCH,//false, // prefixMatch + BD.DEFAULT_MIN_RELEVANCE,//0d, // minCosine + BD.DEFAULT_MAX_RELEVANCE,//1.0d, // maxCosine + BD.DEFAULT_MAX_HITS,//10000, // maxRank (=maxResults + 1) + BD.DEFAULT_MATCH_ALL_TERMS,//false, // matchAllTerms + BD.DEFAULT_TIMEOUT,//1000L, // timeout TimeUnit.MILLISECONDS // unit ); @@ -814,7 +821,8 @@ new BindingImpl("s", s), new BindingImpl("o", o), new BindingImpl("score", score)); - System.err.println(bs); + if(log.isInfoEnabled()) + log.info(bs); answer.add(bs); } @@ -845,9 +853,10 @@ int i = 0; while (result.hasNext()) { - System.err.println(i++ + ": " + result.next().toString()); + if(log.isInfoEnabled()) + log.info(i++ + ": " + result.next().toString()); } - assertTrue("wrong # of results", i == 5); + assertEquals("wrong # of results", 5, i); result = tupleQuery.evaluate(); @@ -858,12 +867,12 @@ final Hiterator<IHit> hits = search.search(searchQuery, null, // languageCode - false, // prefixMatch - 0d, // minCosine - 1.0d, // maxCosine + 
BD.DEFAULT_PREFIX_MATCH,//false, // prefixMatch + BD.DEFAULT_MIN_RELEVANCE,//0d, // minCosine + BD.DEFAULT_MAX_RELEVANCE,//1.0d, // maxCosine maxHits+1, // maxRank (=maxResults + 1) - false, // matchAllTerms - 1000L, // timeout + BD.DEFAULT_MATCH_ALL_TERMS,//false, // matchAllTerms + BD.DEFAULT_TIMEOUT,//1000L, // timeout TimeUnit.MILLISECONDS // unit ); @@ -877,7 +886,8 @@ new BindingImpl("s", s), new BindingImpl("o", o), new BindingImpl("score", score)); - System.err.println(bs); + if(log.isInfoEnabled()) + log.info(bs); answer.add(bs); } @@ -910,9 +920,10 @@ int i = 0; while (result.hasNext()) { - System.err.println(i++ + ": " + result.next().toString()); + if(log.isInfoEnabled()) + log.info(i++ + ": " + result.next().toString()); } - assertTrue("wrong # of results", i == 2); + assertEquals("wrong # of results", 2, i); result = tupleQuery.evaluate(); @@ -923,12 +934,12 @@ final Hiterator<IHit> hits = search.search(searchQuery, null, // languageCode - false, // prefixMatch + BD.DEFAULT_PREFIX_MATCH,//false, // prefixMatch minRelevance, // minCosine maxRelevance, // maxCosine - 10000, // maxRank (=maxResults + 1) - false, // matchAllTerms - 1000L, // timeout + BD.DEFAULT_MAX_HITS,//10000, // maxRank (=maxResults + 1) + BD.DEFAULT_MATCH_ALL_TERMS,//false, // matchAllTerms + BD.DEFAULT_TIMEOUT,//1000L, // timeout TimeUnit.MILLISECONDS // unit ); @@ -942,7 +953,8 @@ new BindingImpl("s", s), new BindingImpl("o", o), new BindingImpl("score", score)); - System.err.println(bs); + if(log.isInfoEnabled()) + log.info(bs); answer.add(bs); } @@ -969,7 +981,8 @@ "} " + "order by desc(?score)"; - log.info("\n"+query); + if(log.isInfoEnabled()) + log.info("\n"+query); final TupleQuery tupleQuery = cxn.prepareTupleQuery(QueryLanguage.SPARQL, query); @@ -978,9 +991,10 @@ int i = 0; while (result.hasNext()) { - log.info(i++ + ": " + result.next().toString()); + if(log.isInfoEnabled()) + log.info(i++ + ": " + result.next().toString()); } - assertTrue("wrong # of results: " + i, i == 
2); + assertEquals("wrong # of results: " + i, 2, i); result = tupleQuery.evaluate(); @@ -991,12 +1005,12 @@ final Hiterator<IHit> hits = search.search(searchQuery, null, // languageCode - false, // prefixMatch + BD.DEFAULT_PREFIX_MATCH,//false, // prefixMatch minRelevance, // minCosine maxRelevance, // maxCosine - 10000, // maxRank (=maxResults + 1) - false, // matchAllTerms - 1000L, // timeout + BD.DEFAULT_MAX_HITS,//10000, // maxRank (=maxResults + 1) + BD.DEFAULT_MATCH_ALL_TERMS,//false, // matchAllTerms + BD.DEFAULT_TIMEOUT,//1000L, // timeout TimeUnit.MILLISECONDS // unit ); @@ -1039,7 +1053,8 @@ "} " + "order by desc(?score)"; - log.info("\n"+query); + if(log.isInfoEnabled()) + log.info("\n"+query); final TupleQuery tupleQuery = cxn.prepareTupleQuery(QueryLanguage.SPARQL, query); @@ -1048,9 +1063,10 @@ int i = 0; while (result.hasNext()) { - log.info(i++ + ": " + result.next().toString()); + if(log.isInfoEnabled()) + log.info(i++ + ": " + result.next().toString()); } - assertTrue("wrong # of results: " + i, i == 3); + assertEquals("wrong # of results: " + i, 3, i); result = tupleQuery.evaluate(); @@ -1064,9 +1080,9 @@ true, // prefixMatch minRelevance, // minCosine maxRelevance, // maxCosine - 10000, // maxRank (=maxResults + 1) - false, // matchAllTerms - 1000L, // timeout + BD.DEFAULT_MAX_HITS,//10000, // maxRank (=maxResults + 1) + BD.DEFAULT_MATCH_ALL_TERMS,//false, // matchAllTerms + BD.DEFAULT_TIMEOUT,//1000L, // timeout TimeUnit.MILLISECONDS // unit ); @@ -1092,7 +1108,7 @@ final String searchQuery = "to*"; final double minRelevance = 0.0d; - final double maxRelevance = 1.0d; + final double maxRelevance = BD.DEFAULT_MAX_RELEVANCE;//1.0d; final String query = "select ?s ?o ?score " + @@ -1101,13 +1117,14 @@ " ?s <"+RDFS.LABEL+"> ?o . " + " ?o <"+BD.SEARCH+"> \""+searchQuery+"\" . " + " ?o <"+BD.RELEVANCE+"> ?score . " + -// " ?o <"+BD.MIN_RELEVANCE+"> \""+minRelevance+"\" . " + + " ?o <"+BD.MIN_RELEVANCE+"> \""+minRelevance+"\" . 
" + // " ?o <"+BD.MAX_HITS+"> \"5\" . " + // " filter regex(?o, \""+searchQuery+"\") " + "} " + "order by desc(?score)"; - log.info("\n"+query); + if(log.isInfoEnabled()) + log.info("\n"+query); final TupleQuery tupleQuery = cxn.prepareTupleQuery(QueryLanguage.SPARQL, query); @@ -1116,9 +1133,10 @@ int i = 0; while (result.hasNext()) { - log.info(i++ + ": " + result.next().toString()); + if(log.isInfoEnabled()) + log.info(i++ + ": " + result.next().toString()); } - assertTrue("wrong # of results: " + i, i == 1); + assertEquals("wrong # of results: " + i, 1, i); result = tupleQuery.evaluate(); @@ -1132,9 +1150,9 @@ true, // prefixMatch minRelevance, // minCosine maxRelevance, // maxCosine - 10000, // maxRank (=maxResults + 1) - false, // matchAllTerms - 1000L, // timeout + BD.DEFAULT_MAX_HITS,//10000, // maxRank (=maxResults + 1) + BD.DEFAULT_MATCH_ALL_TERMS,//false, // matchAllTerms + BD.DEFAULT_TIMEOUT,//1000L, // timeout TimeUnit.MILLISECONDS // unit ); @@ -1180,7 +1198,8 @@ int i = 0; while (result.hasNext()) { - log.info(i++ + ": " + result.next().toString()); + if(log.isInfoEnabled()) + log.info(i++ + ": " + result.next().toString()); } // assertTrue("wrong # of results: " + i, i == 1); @@ -1196,9 +1215,9 @@ true, // prefixMatch minRelevance, // minCosine maxRelevance, // maxCosine - 10000, // maxRank (=maxResults + 1) + BD.DEFAULT_MAX_HITS,//10000, // maxRank (=maxResults + 1) true, // matchAllTerms - 1000L, // timeout + BD.DEFAULT_TIMEOUT,//1000L, // timeout TimeUnit.MILLISECONDS // unit ); This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |