This list is closed, nobody may subscribe to it.
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2010 |  | 19 | 8 | 25 | 16 | 77 | 131 | 76 | 30 | 7 | 3 |  |
| 2011 |  |  |  |  | 2 | 2 | 16 | 3 | 1 |  | 7 | 7 |
| 2012 | 10 | 1 | 8 | 6 | 1 | 3 | 1 |  | 1 |  | 8 | 2 |
| 2013 | 5 | 12 | 2 | 1 | 1 | 1 | 22 | 50 | 31 | 64 | 83 | 28 |
| 2014 | 31 | 18 | 27 | 39 | 45 | 15 | 6 | 27 | 6 | 67 | 70 | 1 |
| 2015 | 3 | 18 | 22 | 121 | 42 | 17 | 8 | 11 | 26 | 15 | 66 | 38 |
| 2016 | 14 | 59 | 28 | 44 | 21 | 12 | 9 | 11 | 4 | 2 | 1 |  |
| 2017 | 20 | 7 | 4 | 18 | 7 | 3 | 13 | 2 | 4 | 9 | 2 | 5 |
| 2018 |  |  |  | 2 |  |  |  |  |  |  |  |  |
| 2019 |  |  | 1 |  |  |  |  |  |  |  |  |  |
From: husdon <no...@no...> - 2010-10-18 15:08:19
See <http://localhost/job/BigData/164/>
From: husdon <no...@no...> - 2010-10-15 13:04:13
See <http://localhost/job/BigData/163/changes>

Changes:

[thompsonbry] Added note to the effect that you must run "ant bundleJar" in the top-level directory before using any of the ant scripts in the bigdata-perf module.

------------------------------------------
[...truncated 7502 lines...]
[...javadoc output listing truncated: generated HTML for the com.bigdata.striterator, com.bigdata.util, com.bigdata.util.concurrent, com.bigdata.util.config, com.bigdata.util.httpd, com.bigdata.zookeeper, org.apache.system, and org.openrdf.rio.rdfxml packages...]
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
From: Bryan T. <br...@sy...> - 2010-10-11 15:22:10
[Resend to the developer's list.]

Fred,

These are internal uses of the B+Tree, so I do not believe that it is inappropriate to subclass. Have you verified that these changes do not break binary compatibility? We have a lot of data out there using the CommitTimeIndex and the CommitRecordIndex on the standalone Journal, and compatibility there is paramount. The "unused" CounterSetBTree is part of an experiment and as such does not belong in the trunk. However, we do need to move the performance counters onto appropriate index structures at some point for better scaling.

Thanks,
Bryan

> -----Original Message-----
> From: fko...@us...
> [mailto:fko...@us...]
> Sent: Friday, October 08, 2010 6:22 PM
> To: big...@li...
> Subject: [Bigdata-commit] SF.net SVN: bigdata:[3761]
> branches/maven_scaleout/bigdata-core
>
> Revision: 3761
> http://bigdata.svn.sourceforge.net/bigdata/?rev=3761&view=rev
> Author: fkoliver
> Date: 2010-10-08 22:21:56 +0000 (Fri, 08 Oct 2010)
>
> Log Message:
> -----------
> Change some subclasses of BTree into uses of BTree:
> EventBTree, CommitTimeIndex, CommitRecordIndex, JournalIndex,
> IndexSegmentIndex
>
> Users of these specialized trees should see only a specialized set of
> methods rather than the full inheritance from BTree. These classes
> should hide key/value conversion. Users should never need to see ITuple
> or ITupleSerializer.
>
> Removed unused CounterSetBTree.
>
> Modified Paths:
> --------------
> branches/maven_scaleout/bigdata-core/pom.xml
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/btree/AbstractBTree.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/btree/BTree.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/btree/IndexSegment.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/btree/MetadataIndex.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/btree/MetadataIndexView.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/btree/Node.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/btree/ReadOnlyIndex.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/cache/HardReferenceGlobalLRURecycler.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/cache/LRUNexus.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/counters/httpd/DummyEventReportingService.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/journal/AbstractJournal.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/journal/CommitRecordIndex.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/journal/DumpJournal.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/loadbalancer/EmbeddedLoadBalancer.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/resources/IndexSegmentIndex.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/resources/JournalIndex.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/resources/ResourceEvents.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/resources/StoreManager.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/service/AbstractTransactionService.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/service/CommitTimeIndex.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/service/DistributedTransactionService.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/service/EventReceiver.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/service/LoadBalancerService.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/service/MetadataIndexCache.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/transaction/EmbeddedTransactionService.java
> branches/maven_scaleout/bigdata-core/src/test/java/com/bigdata/counters/TestAll.java
> branches/maven_scaleout/bigdata-core/src/test/java/com/bigdata/journal/TestCommitRecordIndex.java
> branches/maven_scaleout/bigdata-core/src/test/java/com/bigdata/service/TestDistributedTransactionServiceRestart.java
> branches/maven_scaleout/bigdata-core/src/test/java/com/bigdata/service/TestEventReceiver.java
> branches/maven_scaleout/bigdata-core/src/test/java/com/bigdata/service/TestSnapshotHelper.java
>
> Added Paths:
> -----------
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/service/EventBTree.java
>
> Removed Paths:
> -------------
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/btree/DelegateIndex.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/counters/query/CounterSetBTreeSelector.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/counters/query/CounterSetLoader.java
> branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/counters/store/CounterSetBTree.java
> branches/maven_scaleout/bigdata-core/src/test/java/com/bigdata/counters/store/TestAll.java
> branches/maven_scaleout/bigdata-core/src/test/java/com/bigdata/counters/store/TestCounterSetBTree.java
>
> Modified: branches/maven_scaleout/bigdata-core/pom.xml
=================================================================== > --- branches/maven_scaleout/bigdata-core/pom.xml > 2010-10-08 19:57:43 UTC (rev 3760) > +++ branches/maven_scaleout/bigdata-core/pom.xml > 2010-10-08 22:21:56 UTC (rev 3761) > @@ -60,6 +60,8 @@ > java files were getting > recompiled and put into the bigdata jar. This setting forces > javac to only look for source in the current maven source directory. > --> > > <sourcepath>${project.build.sourceDirectory}</sourcepath> > + <!-- <Xlint></Xlint> --> > + <!-- <Xlint:unchecked></Xlint:unchecked> --> > </compilerArguments> > </configuration> > </plugin> > @@ -141,7 +143,7 @@ > <!-- These system properties are > required by the unit tests. --> > <systemPropertyVariables> > > <java.security.policy>${java.security.policy}</java.security.policy> > - > <java.net.preferIPv4Stack>{java.net.preferIPv4Stack}"</java.ne > t.preferIPv4Stack> > + > <java.net.preferIPv4Stack>{java.net.preferIPv4Stack}</java.net > .preferIPv4Stack> > > <log4j.configuration>${log4j.configuration}</log4j.configuration> > > > <app.home>${app.home}</app.home> <!-- This is the deployment > directory, easily accessed by the DataFinder class. 
--> > @@ -160,7 +162,7 @@ > > <fastutil.jar>${deploy.lib}/fastutil.jar</fastutil.jar> > > <icu4j.jar>${deploy.lib}/icu4j.jar</icu4j.jar> > > <jsk-lib.jar>${deploy.lib}/jsk-lib.jar</jsk-lib.jar> > - > <jsk-platform.jar>${deploy.lib}jsk-platform.jar</jsk-platform.jar> > + > <jsk-platform.jar>${deploy.lib}/jsk-platform.jar</jsk-platform.jar> > > <log4j.jar>${deploy.lib}/log4j.jar</log4j.jar> > > <iris.jar>${deploy.lib}/iris.jar</iris.jar> > > <jgrapht.jar>${deploy.lib}/jgrapht.jar</jgrapht.jar> > @@ -168,6 +170,13 @@ > > <slf4j.jar>${deploy.lib}/slf4j.jar</slf4j.jar> > > <nxparser.jar>${deploy.lib}/nxparser.jar</nxparser.jar> > > <zookeeper.jar>${deploy.lib}/zookeeper.jar</zookeeper.jar> > + > + > + > + <basedir>${deploy.dir}/testing</basedir> > + > + > + > </systemPropertyVariables> > </configuration> > </execution> > > Modified: > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/AbstractBTree.java > =================================================================== > --- > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/AbstractBTree.java 2010-10-08 19:57:43 UTC (rev 3760) > +++ > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/AbstractBTree.java 2010-10-08 22:21:56 UTC (rev 3761) > @@ -128,7 +128,6 @@ > * </p> > * > * @author <a > href="mailto:tho...@us...">Bryan Thompson</a> > - * @version $Id$ > * > * @see KeyBuilder > */ > @@ -164,7 +163,7 @@ > /** > * Log for btree opeations. > */ > - protected static final Logger log = > Logger.getLogger(AbstractBTree.class); > + private static final Logger log = > Logger.getLogger(AbstractBTree.class); > > /** > * True iff the {@link #log} level is INFO or less. 
> @@ -338,7 +337,6 @@ > * > * @author <a > href="mailto:tho...@us...">Bryan > * Thompson</a> > - * @version $Id$ > */ > static class ChildMemoizer extends > Memoizer<LoadChildRequest/* request */, > AbstractNode<?>/* child */> { > @@ -1362,7 +1360,6 @@ > * Static class since must be {@link Serializable}. > * > * @author <a > href="mailto:tho...@us...">Bryan Thompson</a> > - * @version $Id$ > */ > static final class TransientResourceMetadata implements > IResourceMetadata { > > @@ -2588,6 +2585,7 @@ > > } > > + //fko ===== needs generic type ===== > final public ITupleIterator rangeIterator() { > > return rangeIterator(null, null); > @@ -2602,6 +2600,7 @@ > * @param toKey > * @return > */ > + //fko ===== needs generic type ===== > final public ITupleIterator rangeIterator(Object > fromKey, Object toKey) { > > fromKey = fromKey == null ? null : > metadata.getTupleSerializer() > @@ -2614,6 +2613,7 @@ > > } > > + //fko ===== needs generic type ===== > final public ITupleIterator rangeIterator(byte[] > fromKey, byte[] toKey) { > > return rangeIterator(fromKey, toKey, 0/* capacity */, > @@ -2665,6 +2665,7 @@ > * @param toKey > * @return > */ > + //fko ===== needs generic type ===== > final public ITupleIterator rangeIterator(Object > fromKey, Object toKey, > final int capacity,// > final int flags,// > @@ -2705,208 +2706,13 @@ > * @todo add support to the iterator construct for > filtering by a tuple > * revision timestamp range. 
> */ > - public ITupleIterator rangeIterator(// > - final byte[] fromKey,// > - final byte[] toKey,// > - final int capacityIsIgnored,// > - final int flags,// > - final IFilterConstructor filter// > - ) { > + //fko ===== needs generic type ===== > + abstract public ITupleIterator rangeIterator(final > byte[] fromKey, > + final byte[] toKey, > + final int > capacityIsIgnored, > + final int flags, > + final > IFilterConstructor filter); > > -// btreeCounters.nrangeIterator.incrementAndGet(); > - > - /* > - * Does the iterator declare that it will not write > back on the index? > - */ > - final boolean readOnly = ((flags & > IRangeQuery.READONLY) != 0); > - > - if (readOnly && ((flags & IRangeQuery.REMOVEALL) != 0)) { > - > - throw new IllegalArgumentException(); > - > - } > - > - /* > - * Note: this does not work out since it is not so > easy to determine when > - * the iterator is a point test as toKey is the > exclusive upper bound. > - */ > -// * Note: this method will automatically apply the > optional bloom filter to > -// * reject range iterator requests that correspond > to a point test. However > -// * this can only be done when the fromKey and toKey > are both non-null and > -// * equals and further when the iterator was not > requested with any options > -// * that would permit concurrent modification of the index. > -// if (isBloomFilter() > -// && fromKey != null > -// && toKey != null > -// && (readOnly || (((flags & REMOVEALL) == > 0) && ((flags & CURSOR) == 0))) > -// && BytesUtil.bytesEqual(fromKey, toKey)) { > -// > -// /* > -// * Do a fast rejection test using the bloom filter. > -// */ > -// if(!getBloomFilter().contains(fromKey)) { > -// > -// /* > -// * The key is known to not be in the index > so return an empty > -// * iterator. > -// */ > -// return EmptyTupleIterator.INSTANCE; > -// > -// } > -// > -// /* > -// * Since the bloom filter accepts the key we > fall through into the > -// * normal iterator logic. 
Using this code path > is still possible > -// * that the filter gave us a false positive > and that the key is not > -// * (in fact) in the index. Either way, the > logic below will sort > -// * things out. > -// */ > -// > -// } > - > - /* > - * Figure out what base iterator implementation to > use. We will layer > - * on the optional filter(s) below. > - */ > - ITupleIterator src; > - > - if ((this instanceof BTree) && ((flags & REVERSE) == 0) > - && ((flags & REMOVEALL) == 0) && ((flags & > CURSOR) == 0)) { > - > - /* > - * Use the recursion-based striterator since it > is faster for a > - * BTree (but not for an IndexSegment). > - * > - * Note: The recursion-based striterator does > not support remove()! > - * > - * @todo we could pass in the Tuple here to make > the APIs a bit more > - * consistent across the recursion-based and the > cursor based > - * iterators. > - * > - * @todo when the capacity is one and REVERSE is > specified then we > - * can optimize this using a reverse traversal > striterator - this > - * will have lower overhead than the cursor for > the BTree (but not > - * for an IndexSegment). > - */ > - > -// src = fastForwardIterator(fromKey, toKey, > capacity, flags); > - > - src = getRoot().rangeIterator(fromKey, toKey, flags); > - > - } else { > - > - final Tuple tuple = new Tuple(this, flags); > - > - if (this instanceof IndexSegment) { > - > - final IndexSegment seg = (IndexSegment) this; > - > - /* > - * @todo we could scan the list of pools and > chose the best fit > - * pool and then allocate a buffer from that > pool. Best fit > - * would mean either the byte range fits > without "too much" slop > - * or the #of reads will have to perform is > not too large. We > - * might also want to limit the maximum size > of the reads. 
> - */ > - > -// final DirectBufferPool pool = > DirectBufferPool.INSTANCE_10M; > - final DirectBufferPool pool = > DirectBufferPool.INSTANCE; > - > - if (true > - && ((flags & REVERSE) == 0) > - && ((flags & CURSOR) == 0) > - && > (seg.getStore().getCheckpoint().maxNodeOrLeafLength <= pool > - .getBufferCapacity()) > - && ((rangeCount(fromKey, toKey) / > branchingFactor) > 2)) { > - > - src = new > IndexSegmentMultiBlockIterator(seg, pool, > - fromKey, toKey, flags); > - > - } else { > - > - src = new IndexSegmentTupleCursor(seg, > tuple, fromKey, > - toKey); > - > - } > - > - } else if (this instanceof BTree) { > - > - if (isReadOnly()) { > - > - // Note: this iterator does not allow removal. > - src = new > ReadOnlyBTreeTupleCursor(((BTree) this), tuple, > - fromKey, toKey); > - > - } else { > - > - // Note: this iterator supports > traversal with concurrent > - // modification. > - src = new MutableBTreeTupleCursor(((BTree) this), > - new Tuple(this, flags), fromKey, toKey); > - > - } > - > - } else { > - > - throw new UnsupportedOperationException( > - "Unknown B+Tree implementation: " > - + this.getClass().getName()); > - > - } > - > - if ((flags & REVERSE) != 0) { > - > - /* > - * Reverse scan iterator. > - * > - * Note: The reverse scan MUST be layered > directly over the > - * ITupleCursor. Most critically, REMOVEALL > combined with a > - * REVERSE scan needs to process the tuples > in reverse index > - * order and then delete them as it goes. > - */ > - > - src = new Reverserator((ITupleCursor) src); > - > - } > - > - } > - > - if (filter != null) { > - > - /* > - * Apply the optional filter. > - * > - * Note: This needs to be after the reverse scan > and before > - * REMOVEALL (those are the assumptions for the flags). > - */ > - > - src = filter.newInstance(src); > - > - } > - > - if ((flags & REMOVEALL) != 0) { > - > - assertNotReadOnly(); > - > - /* > - * Note: This iterator removes each tuple that > it visits from the > - * source iterator. 
> - */ > - > - src = new TupleRemover() { > - @Override > - protected boolean remove(ITuple e) { > - // remove all visited tuples. > - return true; > - } > - }.filter(src); > - > - } > - > - return src; > - > - } > - > /** > * Copy all data, including deleted index entry markers > and timestamps iff > * supported by the source and target. The goal is an > exact copy of the data > @@ -3893,7 +3699,6 @@ > * {@link Reference} (a runtime security manager > exception will result). > * > * @author <a > href="mailto:tho...@us...">Bryan Thompson</a> > - * @version $Id$ > * > * @param <T> > */ > > Modified: > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/BTree.java > =================================================================== > --- > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/BTree.java 2010-10-08 19:57:43 UTC (rev 3760) > +++ > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/BTree.java 2010-10-08 22:21:56 UTC (rev 3761) > @@ -32,9 +32,13 @@ > > import com.bigdata.BigdataStatics; > import > com.bigdata.btree.AbstractBTreeTupleCursor.MutableBTreeTupleCursor; > +import > com.bigdata.btree.AbstractBTreeTupleCursor.ReadOnlyBTreeTupleCursor; > import com.bigdata.btree.Leaf.ILeafListener; > import com.bigdata.btree.data.ILeafData; > import com.bigdata.btree.data.INodeData; > +import com.bigdata.btree.filter.IFilterConstructor; > +import com.bigdata.btree.filter.Reverserator; > +import com.bigdata.btree.filter.TupleRemover; > import com.bigdata.journal.AbstractJournal; > import com.bigdata.journal.ICommitter; > import com.bigdata.journal.IIndexManager; > @@ -42,6 +46,7 @@ > import com.bigdata.journal.Name2Addr.Entry; > import com.bigdata.mdi.IResourceMetadata; > import com.bigdata.rawstore.IRawStore; > +import org.apache.log4j.Logger; > > /** > * <p> > @@ -151,10 +156,14 @@ > * several published papers. 
> * > * @author <a > href="mailto:tho...@us...">Bryan Thompson</a> > - * @version $Id$ > */ > public class BTree extends AbstractBTree implements > ICommitter, ILocalBTreeView { > > + /** > + * Log for btree operations. > + */ > + private static final Logger log = Logger.getLogger(BTree.class); > + > final public int getHeight() { > > return height; > @@ -188,6 +197,7 @@ > > } > > + @Override > public final IResourceMetadata[] getResourceMetadata() { > //override to make final so sub-classes cannot > modify behavior. > return super.getResourceMetadata(); > @@ -204,23 +214,23 @@ > */ > public ICounter getCounter() { > > - ICounter counter = new Counter(this); > + ICounter tmpCounter = new Counter(this); > > final LocalPartitionMetadata pmd = > metadata.getPartitionMetadata(); > > if (pmd != null) { > > - counter = new > PartitionedCounter(pmd.getPartitionId(), counter); > + tmpCounter = new > PartitionedCounter(pmd.getPartitionId(), tmpCounter); > > } > > if (isReadOnly()) { > > - return new ReadOnlyCounter(counter); > + return new ReadOnlyCounter(tmpCounter); > > } > > - return counter; > + return tmpCounter; > > } > > @@ -393,7 +403,7 @@ > * <p> > * Note: The {@link #getCounter()} is NOT changed by this method. > */ > - final private void newRootLeaf() { > + private void newRootLeaf() { > > height = 0; > > @@ -1339,14 +1349,8 @@ > // view of this BTree. > newResources[1] = new > JournalMetadata((AbstractJournal) getStore(), > priorCommitTime); > + System.arraycopy(oldResources, 1, newResources, > 2, oldResources.length - 1); > > - // any other stores in the view are copied. > - for (int i = 1; i < oldResources.length; i++) { > - > - newResources[i + 1] = oldResources[i]; > - > - } > - > final LocalPartitionMetadata newPmd = new > LocalPartitionMetadata( > oldPmd.getPartitionId(), // partitionId > -1, // sourcePartitionId > @@ -1567,7 +1571,6 @@ > * @throws IllegalArgumentException > * if store is <code>null</code>. 
> */ > - @SuppressWarnings("unchecked") > public static BTree load(final IRawStore store, final > long addrCheckpoint, > final boolean readOnly) { > > @@ -1653,6 +1656,147 @@ > } > > /** > + * Core implementation. > + * <p> > + * Note: If {@link IRangeQuery#CURSOR} is specified the > returned iterator > + * supports traversal with concurrent modification by a > single-threaded > + * process (the {@link BTree} is NOT thread-safe for > writers). Write are > + * permitted iff {@link AbstractBTree} allows writes. > + * <p> > + * Note: {@link IRangeQuery#REVERSE} is handled here by > wrapping the > + * underlying {@link ITupleCursor}. > + * <p> > + * Note: {@link IRangeQuery#REMOVEALL} is handled here > by wrapping the > + * iterator. > + * <p> > + * Note: > + * {@link FusedView#rangeIterator(byte[], byte[], int, > int, IFilterConstructor)} > + * is also responsible for constructing an {@link > ITupleIterator} in a > + * manner similar to this method. If you are updating > the logic here, then > + * check the logic in that method as well! > + * > + * @todo add support to the iterator construct for > filtering by a tuple > + * revision timestamp range. > + */ > + public ITupleIterator rangeIterator(// > + final byte[] fromKey,// > + final byte[] toKey,// > + final int capacityIsIgnored,// > + final int flags,// > + final IFilterConstructor filter// > + ) { > + > + /* > + * Does the iterator declare that it will not write > back on the index? > + */ > + final boolean ro = ((flags & IRangeQuery.READONLY) != 0); > + > + if (ro && ((flags & IRangeQuery.REMOVEALL) != 0)) { > + > + throw new IllegalArgumentException(); > + > + } > + > + /* > + * Figure out what base iterator implementation to > use. We will layer > + * on the optional filter(s) below. 
> + */ > + ITupleIterator src; > + > + if (((flags & REVERSE) == 0) && > + ((flags & REMOVEALL) == 0) && > + ((flags & CURSOR) == 0)) { > + > + /* > + * Use the recursion-based striterator since it > is faster for a > + * BTree (but not for an IndexSegment). > + * > + * Note: The recursion-based striterator does > not support remove()! > + * > + * @todo we could pass in the Tuple here to make > the APIs a bit more > + * consistent across the recursion-based and the > cursor based > + * iterators. > + * > + * @todo when the capacity is one and REVERSE is > specified then we > + * can optimize this using a reverse traversal > striterator - this > + * will have lower overhead than the cursor for > the BTree (but not > + * for an IndexSegment). > + */ > + src = getRoot().rangeIterator(fromKey, toKey, flags); > + > + } else { > + > + final Tuple tuple = new Tuple(this, flags); > + > + if (isReadOnly()) { > + > + // Note: this iterator does not allow removal. > + src = new ReadOnlyBTreeTupleCursor(((BTree) > this), tuple, > + fromKey, toKey); > + > + } else { > + > + // Note: this iterator supports traversal > with concurrent > + // modification. > + src = new MutableBTreeTupleCursor(((BTree) this), > + new Tuple(this, flags), fromKey, toKey); > + > + } > + > + if ((flags & REVERSE) != 0) { > + > + /* > + * Reverse scan iterator. > + * > + * Note: The reverse scan MUST be layered > directly over the > + * ITupleCursor. Most critically, REMOVEALL > combined with a > + * REVERSE scan needs to process the tuples > in reverse index > + * order and then delete them as it goes. > + */ > + > + src = new Reverserator((ITupleCursor) src); > + > + } > + > + } > + > + if (filter != null) { > + > + /* > + * Apply the optional filter. > + * > + * Note: This needs to be after the reverse scan > and before > + * REMOVEALL (those are the assumptions for the flags). 
> + */ > + > + src = filter.newInstance(src); > + > + } > + > + if ((flags & REMOVEALL) != 0) { > + > + assertNotReadOnly(); > + > + /* > + * Note: This iterator removes each tuple that > it visits from the > + * source iterator. > + */ > + > + src = new TupleRemover() { > + @Override > + protected boolean remove(ITuple e) { > + // remove all visited tuples. > + return true; > + } > + }.filter(src); > + > + } > + > + return src; > + > + } > + > + /** > * Factory for mutable nodes and leaves used by the > {@link NodeSerializer}. > */ > protected static class NodeFactory implements INodeFactory { > @@ -1682,7 +1826,6 @@ > * Mutable counter. > * > * @author <a > href="mailto:tho...@us...">Bryan Thompson</a> > - * @version $Id$ > */ > public static class Counter implements ICounter { > > @@ -1742,7 +1885,6 @@ > * int32 word. > * > * @author <a > href="mailto:tho...@us...">Bryan Thompson</a> > - * @version $Id$ > */ > public static class PartitionedCounter implements ICounter { > > @@ -1851,7 +1993,6 @@ > * > * @author <a > href="mailto:tho...@us...">Bryan > * Thompson</a> > - * @version $Id$ > */ > protected static class Stack { > > @@ -2027,7 +2168,6 @@ > * Note: The {@link MutableBTreeTupleCursor} does > register such listeners. > * > * @author <a > href="mailto:tho...@us...">Bryan Thompson</a> > - * @version $Id$ > */ > public class LeafCursor implements ILeafCursor<Leaf> { > > @@ -2125,6 +2265,7 @@ > > } > > + @Override > public LeafCursor clone() { > > return new LeafCursor(this); > @@ -2177,7 +2318,7 @@ > > } > > - public Leaf first() { > + final public Leaf first() { > > stack.clear(); > > @@ -2198,7 +2339,7 @@ > > } > > - public Leaf last() { > + final public Leaf last() { > > stack.clear(); > > @@ -2224,7 +2365,7 @@ > * the leaf may not actually contain the key, in > which case it is the > * leaf that contains the insertion point for the key. 
> */ > - public Leaf seek(final byte[] key) { > + final public Leaf seek(final byte[] key) { > > stack.clear(); > > > Deleted: > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/DelegateIndex.java > =================================================================== > --- > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/DelegateIndex.java 2010-10-08 19:57:43 UTC (rev 3760) > +++ > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/DelegateIndex.java 2010-10-08 22:21:56 UTC (rev 3761) > @@ -1,169 +0,0 @@ > -/* > - > -Copyright (C) SYSTAP, LLC 2006-2008. All rights reserved. > - > -Contact: > - SYSTAP, LLC > - 4501 Tower Road > - Greensboro, NC 27410 > - lic...@bi... > - > -This program is free software; you can redistribute it and/or modify > -it under the terms of the GNU General Public License as published by > -the Free Software Foundation; version 2 of the License. > - > -This program is distributed in the hope that it will be useful, > -but WITHOUT ANY WARRANTY; without even the implied warranty of > -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the > -GNU General Public License for more details. > - > -You should have received a copy of the GNU General Public License > -along with this program; if not, write to the Free Software > -Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA > 02111-1307 USA > - > -*/ > -/* > - * Created on Feb 20, 2008 > - */ > - > -package com.bigdata.btree; > - > -import com.bigdata.btree.filter.IFilterConstructor; > -import > com.bigdata.btree.proc.AbstractKeyArrayIndexProcedureConstructor; > -import com.bigdata.btree.proc.IKeyRangeIndexProcedure; > -import com.bigdata.btree.proc.IResultHandler; > -import com.bigdata.btree.proc.ISimpleIndexProcedure; > -import com.bigdata.counters.ICounterSet; > -import com.bigdata.mdi.IResourceMetadata; > - > -/** > - * An object that delegates its {@link IIndex} interface. 
> - * > - * @author <a > href="mailto:tho...@us...">Bryan Thompson</a> > - * @version $Id$ > - */ > -public class DelegateIndex implements IIndex { > - > - private final IIndex delegate; > - > - /** > - * @param delegate > - * The delegate. > - */ > - public DelegateIndex(IIndex delegate) { > - > - if (delegate == null) { > - > - throw new IllegalArgumentException(); > - > - } > - > - this.delegate = delegate; > - > - } > - > - public String toString() { > - > - final StringBuilder sb = new StringBuilder(); > - > - sb.append(getClass().getSimpleName()); > - > - sb.append("{ "); > - > - sb.append(delegate.toString()); > - > - sb.append("}"); > - > - return sb.toString(); > - > - } > - > - public boolean contains(byte[] key) { > - return delegate.contains(key); > - } > - > - public ICounter getCounter() { > - return delegate.getCounter(); > - } > - > - public IndexMetadata getIndexMetadata() { > - return delegate.getIndexMetadata(); > - } > - > - public IResourceMetadata[] getResourceMetadata() { > - return delegate.getResourceMetadata(); > - } > - > - public ICounterSet getCounters() { > - return delegate.getCounters(); > - } > - > - public byte[] insert(byte[] key, byte[] value) { > - return delegate.insert(key, value); > - } > - > - public byte[] lookup(byte[] key) { > - return delegate.lookup(key); > - } > - > - public long rangeCount() { > - return delegate.rangeCount(); > - } > - > - public long rangeCount(byte[] fromKey, byte[] toKey) { > - return delegate.rangeCount(fromKey, toKey); > - } > - > - public long rangeCountExact(byte[] fromKey, byte[] toKey) { > - return delegate.rangeCountExact(fromKey, toKey); > - } > - > - public long rangeCountExactWithDeleted(byte[] fromKey, > byte[] toKey) { > - return delegate.rangeCountExactWithDeleted(fromKey, toKey); > - } > - > - public ITupleIterator rangeIterator() { > - return rangeIterator(null,null); > - } > - > - public ITupleIterator rangeIterator(byte[] fromKey, > byte[] toKey, int capacity, int flags, 
IFilterConstructor filter) { > - return delegate.rangeIterator(fromKey, toKey, > capacity, flags, filter); > - } > - > - public ITupleIterator rangeIterator(byte[] fromKey, > byte[] toKey) { > - return delegate.rangeIterator(fromKey, toKey); > - } > - > - public byte[] remove(byte[] key) { > - return delegate.remove(key); > - } > - > - public Object submit(byte[] key, ISimpleIndexProcedure proc) { > - return delegate.submit(key, proc); > - } > - > - public void submit(byte[] fromKey, byte[] toKey, > IKeyRangeIndexProcedure proc, IResultHandler handler) { > - delegate.submit(fromKey, toKey, proc, handler); > - } > - > - public void submit(int fromIndex, int toIndex, byte[][] > keys, byte[][] vals, > AbstractKeyArrayIndexProcedureConstructor ctor, > IResultHandler handler) { > - delegate.submit(fromIndex, toIndex, keys, vals, > ctor, handler); > - } > - > - public boolean contains(Object key) { > - return delegate.contains(key); > - } > - > - public Object insert(Object key, Object value) { > - return delegate.insert(key, value); > - } > - > - public Object lookup(Object key) { > - return delegate.lookup(key); > - } > - > - public Object remove(Object key) { > - return delegate.remove(key); > - } > - > -} > > Modified: > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/IndexSegment.java > =================================================================== > --- > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/IndexSegment.java 2010-10-08 19:57:43 UTC (rev 3760) > +++ > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/IndexSegment.java 2010-10-08 22:21:56 UTC (rev 3761) > @@ -29,15 +29,20 @@ > import > com.bigdata.btree.IndexSegment.ImmutableNodeFactory.ImmutableLeaf; > import com.bigdata.btree.data.ILeafData; > import com.bigdata.btree.data.INodeData; > +import com.bigdata.btree.filter.IFilterConstructor; > +import com.bigdata.btree.filter.Reverserator; > +import 
com.bigdata.btree.filter.TupleRemover; > import com.bigdata.btree.raba.IRaba; > import com.bigdata.btree.raba.ReadOnlyKeysRaba; > import com.bigdata.btree.raba.ReadOnlyValuesRaba; > import com.bigdata.io.AbstractFixedByteArrayBuffer; > +import com.bigdata.io.DirectBufferPool; > import com.bigdata.io.FixedByteArrayBuffer; > import com.bigdata.mdi.IResourceMetadata; > import com.bigdata.service.Event; > import com.bigdata.service.EventResource; > import com.bigdata.service.EventType; > +import org.apache.log4j.Logger; > > /** > * An index segment is read-only btree corresponding to some > key range of a > @@ -51,11 +56,15 @@ > * leaves will all refuse mutation operations). > * > * @author <a > href="mailto:tho...@us...">Bryan Thompson</a> > - * @version $Id$ > */ > public class IndexSegment extends AbstractBTree { > > /** > + * Log for btree opeations. > + */ > + private static final Logger log = > Logger.getLogger(IndexSegment.class); > + > + /** > * Type safe reference to the backing store. > */ > private final IndexSegmentStore fileStore; > @@ -160,6 +169,7 @@ > * {@link #getEntryCount()} uses the {@link > IndexSegmentCheckpoint} and that > * is only available while the {@link IndexSegmentStore} is open. > */ > + @Override > public String toString() { > > // make sure the fileStore will remain open. > @@ -303,7 +313,7 @@ > } > > @Override > - protected void _reopen() { > + final protected void _reopen() { > > // prevent concurrent close. > fileStore.lock.lock(); > @@ -684,6 +694,138 @@ > }; > } > > + /** > + * Core implementation. > + * <p> > + * Note: If {@link IRangeQuery#CURSOR} is specified the > returned iterator > + * supports traversal with concurrent modification by a > single-threaded > + * process (the {@link BTree} is NOT thread-safe for > writers). Write are > + * permitted iff {@link AbstractBTree} allows writes. > + * <p> > + * Note: {@link IRangeQuery#REVERSE} is handled here by > wrapping the > + * underlying {@link ITupleCursor}. 
> + * <p> > + * Note: {@link IRangeQuery#REMOVEALL} is handled here > by wrapping the > + * iterator. > + * <p> > + * Note: > + * {@link FusedView#rangeIterator(byte[], byte[], int, > int, IFilterConstructor)} > + * is also responsible for constructing an {@link > ITupleIterator} in a > + * manner similar to this method. If you are updating > the logic here, then > + * check the logic in that method as well! > + * > + * @todo add support to the iterator construct for > filtering by a tuple > + * revision timestamp range. > + */ > + public ITupleIterator rangeIterator(// > + final byte[] fromKey,// > + final byte[] toKey,// > + final int capacityIsIgnored,// > + final int flags,// > + final IFilterConstructor filter// > + ) { > + > + /* > + * Does the iterator declare that it will not write > back on the index? > + */ > + final boolean readOnly = ((flags & > IRangeQuery.READONLY) != 0); > + > + if (readOnly && ((flags & IRangeQuery.REMOVEALL) != 0)) { > + > + throw new IllegalArgumentException(); > + > + } > + > + /* > + * Figure out what base iterator implementation to > use. We will layer > + * on the optional filter(s) below. > + */ > + ITupleIterator src; > + > + final Tuple tuple = new Tuple(this, flags); > + > + final IndexSegment seg = (IndexSegment) this; > + > + /* > + * @todo we could scan the list of pools and > chose the best fit > + * pool and then allocate a buffer from that > pool. Best fit > + * would mean either the byte range fits without > "too much" slop > + * or the #of reads will have to perform is not > too large. We > + * might also want to limit the maximum size of > the reads. 
> + */ > + > +// final DirectBufferPool pool = > DirectBufferPool.INSTANCE_10M; > + final DirectBufferPool pool = DirectBufferPool.INSTANCE; > + > + if (true > + && ((flags & REVERSE) == 0) > + && ((flags & CURSOR) == 0) > + && > (seg.getStore().getCheckpoint().maxNodeOrLeafLength <= pool > + .getBufferCapacity()) > + && ((rangeCount(fromKey, toKey) / > branchingFactor) > 2)) { > + > + src = new IndexSegmentMultiBlockIterator(seg, pool, > + fromKey, toKey, flags); > + > + } else { > + > + src = new IndexSegmentTupleCursor(seg, > tuple, fromKey, toKey); > + > + } > + > + > + if ((flags & REVERSE) != 0) { > + > + /* > + * Reverse scan iterator. > + * > + * Note: The reverse scan MUST be layered > directly over the > + * ITupleCursor. Most critically, REMOVEALL > combined with a > + * REVERSE scan needs to process the tuples > in reverse index > + * order and then delete them as it goes. > + */ > + > + src = new Reverserator((ITupleCursor) src); > + > + } > + > + > + if (filter != null) { > + > + /* > + * Apply the optional filter. > + * > + * Note: This needs to be after the reverse scan > and before > + * REMOVEALL (those are the assumptions for the flags). > + */ > + > + src = filter.newInstance(src); > + > + } > + > + if ((flags & REMOVEALL) != 0) { > + > + assertNotReadOnly(); > + > + /* > + * Note: This iterator removes each tuple that > it visits from the > + * source iterator. > + */ > + > + src = new TupleRemover() { > + @Override > + protected boolean remove(ITuple e) { > + // remove all visited tuples. 
> + return true; > + } > + }.filter(src); > + > + } > + > + return src; > + > + } > + > /* > * INodeFactory > */ > @@ -719,7 +861,6 @@ > * > * @author <a > href="mailto:tho...@us...">Bryan > * Thompson</a> > - * @version $Id$ > */ > public static class ImmutableNode extends Node { > > @@ -768,8 +909,6 @@ > * > * @author <a > href="mailto:tho...@us...">Bryan > * Thompson</a> > - * @version $Id: IndexSegment.java 2265 2009-10-26 > 12:51:06Z thompsonbry > - * $ > */ > private static class EmptyReadOnlyLeafData > implements ILeafData { > > @@ -869,7 +1008,6 @@ > * > * @author <a > href="mailto:tho...@us...">Bryan > * Thompson</a> > - * @version $Id$ > */ > public static class ImmutableLeaf extends Leaf { > > @@ -1007,7 +1145,6 @@ > // * > // * @author <a > href="mailto:tho...@us...">Bryan > // * Thompson</a> > -// * @version $Id$ > // */ > // static public class ImmutableEmptyLastLeaf extends > ImmutableLeaf { > // > @@ -1046,7 +1183,6 @@ > * A position for the {@link IndexSegmentTupleCursor}. > * > * @author <a > href="mailto:tho...@us...">Bryan Thompson</a> > - * @version $Id$ > * @param <E> > * The generic type for objects de-serialized > from the values in > * the index. > @@ -1054,6 +1190,7 @@ > static private class CursorPosition<E> extends > AbstractCursorPosition<ImmutableLeaf,E> { > > @SuppressWarnings("unchecked") > + @Override > public IndexSegmentTupleCursor<E> getCursor() { > > return (IndexSegmentTupleCursor)cursor; > @@ -1102,7 +1239,6 @@ > * listeners for concurrent modifications. > * > * @author <a > href="mailto:tho...@us...">Bryan Thompson</a> > - * @version $Id$ > * @param <E> > * The generic type for the objects > de-serialized from the index. > */ > @@ -1265,7 +1401,6 @@ > * Cursor using the double-linked leaves for efficient scans. 
> * > * @author <a > href="mailto:tho...@us...">Bryan Thompson</a> > - * @version $Id$ > */ > public class ImmutableLeafCursor implements > ILeafCursor<ImmutableLeaf> { > > @@ -1283,6 +1418,7 @@ > > } > > + @Override > public ImmutableLeafCursor clone() { > > return new ImmutableLeafCursor(this); > @@ -1333,7 +1469,7 @@ > > } > > - public ImmutableLeaf seek(final byte[] key) { > + final public ImmutableLeaf seek(final byte[] key) { > > leaf = findLeaf(key); > > @@ -1363,7 +1499,7 @@ > > } > > - public ImmutableLeaf first() { > + final public ImmutableLeaf first() { > > final long addr = > getStore().getCheckpoint().addrFirstLeaf; > > @@ -1373,7 +1509,7 @@ > > } > > - public ImmutableLeaf last() { > + final public ImmutableLeaf last() { > > final long addr = > getStore().getCheckpoint().addrLastLeaf; > > > Modified: > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/MetadataIndex.java > =================================================================== > --- > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/MetadataIndex.java 2010-10-08 19:57:43 UTC (rev 3760) > +++ > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/MetadataIndex.java 2010-10-08 22:21:56 UTC (rev 3761) > @@ -29,13 +29,13 @@ > import java.io.ObjectOutput; > import java.util.UUID; > > -import com.bigdata.mdi.PartitionLocator; > import org.CognitiveWeb.extser.LongPacker; > > import com.bigdata.btree.keys.IKeyBuilderFactory; > import com.bigdata.btree.view.FusedView; > import com.bigdata.journal.ICommitter; > import com.bigdata.journal.IResourceManager; > +import com.bigdata.mdi.PartitionLocator; > import com.bigdata.rawstore.IRawStore; > import com.bigdata.service.MetadataService; > > @@ -44,7 +44,7 @@ > * metadata index for each distributed index. 
The keys of > the metadata index are > * the first key that would be directed into the > corresponding index segment, > * e.g., a <em>separator key</em> (this is just the standard > btree semantics). > - * The values are serialized {@link > com.bigdata.mdi.PartitionLocator} objects. > + * The values are serialized {@link PartitionLocator} objects. > * <p> > * Note: At this time the recommended scale-out approach for > the metadata index > * is to place the metadata indices on a {@link > MetadataService} (the same > @@ -63,7 +63,6 @@ > * taking the database offline. > * > * @author <a > href="mailto:tho...@us...">Bryan Thompson</a> > - * @version $Id$ > * > * @todo The {@link MetadataIndex} does NOT support either > overflow (it may NOT > * be a {@link FusedView}) NOR key-range splits. There > are several issues > @@ -97,6 +96,7 @@ > */ > private transient final MetadataIndexView view; > > + @Override > public MetadataIndexMetadata getIndexMetadata() { > > return (MetadataIndexMetadata) super.getIndexMetadata(); > @@ -210,6 +210,7 @@ > * Extended to require a checkpoint if {@link > #incrementAndGetNextPartitionId()} has been > * invoked. > */ > + @Override > public boolean needsCheckpoint() { > > if(nextPartitionId != > ((MetadataIndexCheckpoint)getCheckpoint()).getNextPartitionId()) { > @@ -227,7 +228,6 @@ > * identifier to be assigned by the metadata index. > * > * @author <a > href="mailto:tho...@us...">Bryan Thompson</a> > - * @version $Id$ > */ > public static class MetadataIndexCheckpoint extends Checkpoint { > > @@ -329,7 +329,6 @@ > * for the managed scale-out index. 
> * > * @author <a > href="mailto:tho...@us...">Bryan Thompson</a> > - * @version $Id$ > */ > public static class MetadataIndexMetadata extends > IndexMetadata implements Externalizable { > > @@ -381,6 +380,7 @@ > > private static final transient int VERSION0 = 0x0; > > + @Override > public void readExternal(ObjectInput in) throws IOException, > ClassNotFoundException { > > @@ -398,6 +398,7 @@ > > } > > + @Override > public void writeExternal(ObjectOutput out) throws > IOException { > > super.writeExternal(out); > @@ -430,7 +431,6 @@ > * {@link MetadataIndex}. > * > * @author <a > href="mailto:tho...@us...">Bryan Thompson</a> > - * @version $Id$ > */ > public static class PartitionLocatorTupleSerializer extends > DefaultTupleSerializer<byte[]/*key*/, > PartitionLocator/*val*/> { > @@ -473,6 +473,7 @@ > */ > private final static transient byte VERSION = VERSION0; > > + @Override > public void readExternal(final ObjectInput in) > throws IOException, > ClassNotFoundException { > > @@ -490,6 +491,7 @@ > > } > > + @Override > public void writeExternal(final ObjectOutput out) > throws IOException { > > super.writeExternal(out); > > Modified: > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/MetadataIndexView.java > =================================================================== > --- > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/MetadataIndexView.java 2010-10-08 19:57:43 UTC (rev 3760) > +++ > branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata > /btree/MetadataIndexView.java 2010-10-08 22:21:56 UTC (rev 3761) > @@ -28,27 +28,27 @@ > > package com.bigdata.btree; > > -import com.bigdata.mdi.PartitionLocator; > +import com.bigdata.btree.MetadataIndex.MetadataIndexMetadata; > +import com.bigdata.btree.filter.IFilterConstructor; > import org.apache.log4j.Logger; > > import com.bigdata.cache.LRUCache; > -import com.bigdata.btree.MetadataIndex.MetadataIndexMetadata; > +import com.bigdata.mdi.PartitionLocator; > 
> /** > * The extension semantics for the {@link IMetadataIndex} > are implemented by > * this class. > * > * @author <a > href="mailto:tho...@us...">Bryan Thompson</a> > - * @version $Id$ > */ > -public class MetadataIndexView extends DelegateIndex > implements IMetadataIndex { > +public class MetadataIndexView implements IMetadataIndex { > > protected static final Logger log = > Logger.getLogger(MetadataIndexView.class); > > // protected static final boolean INFO = log.isInfoEnabled(); > // protected static final boolean DEBUG = log.isDebugEnabled(); > > - private final AbstractBTree delegate; > + private final MetadataIndex metadataIndex; > > /** > * <code>true</code> iff this is a read-only view. this > is used to > @@ -58,19 +58,14 @@ > */ > final private boolean readOnly; > > - public MetadataIndexView(AbstractBTree delegate) { > - > - super(delegate); > - > - this.delegate = delegate; > - > - this.readOnly = delegate.isReadOnly(); > - > + public MetadataIndexView(MetadataIndex metadataIndex) { > + this.metadataIndex = metadataIndex; > + this.readOnly = metadataIndex.isReadOnly(); > } > > public MetadataIndexMetadata getIndexMetadata() { > > - return (MetadataIndexMetadata) super.getIndexMetadata(); > + return (MetadataIndexMetadata) > metadataIndex.getIndexMetadata(); > > } > > @@ -87,7 +82,7 @@ > * de-serialization using the ITupleSerializer. > */ > > - return (PartitionLocator) delegate.lookup((Object) key); > + return (PartitionLocator) metadataIndex.lookup((Object) key); > > } > > @@ -112,7 +107,7 @@ > */ > private PartitionLocator find_with_iterator(byte[] key) { > > - final ITupleIterator<PartitionLocator> itr = > delegate.rangeIterator( > + final ITupleIterator<PartitionLocator> itr = > metadataIndex.rangeIterator( > null/* fromKey */, key/* toKey */, 1/* capacity */, > IRangeQuery.VALS | IRangeQuery.REVERSE, > null/* filter */); > > @@ -147,7 +142,7 @@ > if (key == null) { > > // use the index of the last partition. 
> - index = delegate.getEntryCount() - 1; > + index = metadataIndex.getEntryCount() - 1; > > } else { > > @@ -212,7 +207,7 @@ > > /** > * Remove the locator from the {@link #locatorCache}. It > will be re-read on > - * demand from the {@link #delegate}. > + * demand from the {@link #metadataIndex}. > */ > public void staleLocator(PartitionLocator locator) { > > @@ -230,8 +225,8 @@ > */ > private PartitionLocator getLocatorAtIndex(int index) { > > - final ITuple<PartitionLocator> tuple = > delegate.valueAt(index, > - delegate.getLookupTuple()); > + final ITuple<PartitionLocator> tuple = > metadataIndex.valueAt(index, > + metadataIndex.getLookupTuple()); > > return tuple.getObject(); > > @@ -252,7 +247,7 @@ > */ > private int findIndexOf(byte[] key) { > > - int pos = delegate.indexOf(key); > + int pos = metadataIndex.indexOf(key); > > if (pos < 0) { > > @@ -266,7 +261,7 @@ > > if(pos == 0) { > > - if(delegate.getEntryCount() != 0) { > + if(metadataIndex.getEntryCount() != 0) { > > throw new IllegalStateException( > "Partition not defined for empty key."); > @@ -294,4 +289,34 @@ > > } > > + public long rangeCount() { > + return metadataIndex.rangeCount(); > + } > + > + public long rangeCount(byte[] fromKey, byte[] toKey) { > + return metadataIndex.rangeCount(fromKey, toKey); > + } > + > + public long rangeCountExact(byte[] fromKey, byte[] toKey) { > + return metadataIndex.rangeCountExact(fromKey, toKey); > + } > + > + public long rangeCountExactWithDeleted(byte[] fromKey, > byte[] toKey) { > + return > metadataIndex.rangeCountExactWithDeleted(fromKey, toKey); > + } > + > + public ITupleIterator rangeIterator() { > + return metadataIndex.rangeIterator(); > + } > + > + public ITupleIterator rangeIterator(byte[] fromKey, > byte[] toKey) { > + return metadataIndex.rangeIterator(fromKey, toKey); > + } > + > + public ITupleIterator rangeIterator(byte[] fromKey, byte[] toKey, > + int capacity, int flags, > + IFilterConstructor > filterCtor) { > + return 
>                 metadataIndex.rangeIterator(fromKey, toKey,
>                         capacity, flags, filterCtor);
> +    }
> +
>  }
>
> Modified: branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/btree/Node.java
> ===================================================================
> --- branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/btree/Node.java	2010-10-08 19:57:43 UTC (rev 3760)
> +++ branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/btree/Node.java	2010-10-08 22:21:56 UTC (rev 3761)
> @@ -56,6 +56,7 @@
>  import cutthecrap.utils.striterators.Expander;
>  import cutthecrap.utils.striterators.SingleValueIterator;
>  import cutthecrap.utils.striterators.Striterator;
> +import org.apache.log4j.Logger;
>
>  /**
>   * <p>
> @@ -82,11 +83,15 @@
>   * we can prune the search before we materialize the child.
>   *
>   * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
> - * @version $Id$
>   */
>  public class Node extends AbstractNode<Node> implements INodeData {
>
>      /**
> +     * Log for btree opeations.
> +     */
> +    private static final Logger log = Logger.getLogger(Node.class);
> +
> +    /**
>       * The data record. {@link MutableNodeData} is used for all mutation
>       * operations. {@link ReadOnlyNodeData} is used when the {@link Node} is
>       * made persistent. A read-only data record is automatically converted into
> @@ -623,7 +628,7 @@
>
>          btree.getBtreeCounters().rootsSplit++;
>
> -        if (BTree.log.isInfoEnabled() || BigdataStatics.debug) {
> +        if (log.isInfoEnabled() || BigdataStatics.debug) {
>
>              // Note: nnodes and nleaves might not reflect rightSibling yet.
>
> @@ -632,8 +637,8 @@
>                      + ", m=" + btree.getBranchingFactor() + ", nentries="
>                      + btree.nentries;
>
> -            if (BTree.log.isInfoEnabled())
> -                BTree.log.info(msg);
> +            if (log.isInfoEnabled())
> +                log.info(msg);
>
>              if (BigdataStatics.debug)
>                  System.err.println(msg);
> @@ -2400,8 +2405,8 @@
>          // one less node in the tree.
>          btree.nnodes--;
>
> -        if (BTree.INFO) {
> -            BTree.log.info("reduced tree height: height="
> +        if (INFO) {
> +            log.info("reduced tree height: height="
>                      + btree.height + ", newRoot=" + btree.root);
>          }
>
>
> Modified: branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/btree/ReadOnlyIndex.java
> ===================================================================
> --- branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/btree/ReadOnlyIndex.java	2010-10-08 19:57:43 UTC (rev 3760)
> +++ branches/maven_scaleout/bigdata-core/src/main/java/com/bigdata/btree/ReadOnlyIndex.java	2010-10-08 22:21:56 UTC (rev 3761)
> @@ -27,6 +27,9 @@
>
>  package com.bigdata.btree;
>
> +import com.bigdata.btree.proc.IKeyRangeIndexProcedure;
> +import com.bigdata.btree.proc.ISimpleIndexProcedure;
> +import com.bigdata.counters.ICounterSet;
>  import java.util.Iterator;
>
>  import com.bigdata.btree.filter.IFilterConstructor;
> @@ -47,20 +50,19 @@
>   * @see {@link IResourceManager#getIndex(String, long)}
>   *
>   * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
> - * @version $Id$
>   */
> -public class ReadOnlyIndex extends DelegateIndex {
> +public class ReadOnlyIndex implements IIndex {
> +
> +    private IIndex src;
>
>      public ReadOnlyIndex(IIndex src) {
> -
> -        super(src);
> -
> +        this.src = src;
>      }
>
>      /** {@link IndexMetadata} is cloned to disallow modification. */
>      final public IndexMetadata getIndexMetadata() {
>
> -        return super.getIndexMetadata().clone();
> +        return src.getIndexMetadata().clone();
>
>      }
>
> @@ -71,7 +73,7 @@
>       */
>      final public IResourceMetadata[] getResourceMetadata() {
>
> -        return super.getResourceMetadata().clone();
> +        return src.getResourceMetadata().clone();
>
>      }
>
> @@ -80,7 +82,7 @@
>       */
>      final public ICounter getCounter() {
>
> -        return new ReadOnlyCounter(super.getCounter());
> +        return new ReadOnlyCounter(src.getCounter());
>
>      }
>
> @@ -121,7 +123,7 @@
>          /*
>           * Must explicitly disable Iterator#remove().
>           */
> -        return new ReadOnlyEntryIterator(super.rangeIterator(fromKey, toKey,
> +        return new ReadOnlyEntryIterator(src.rangeIterator(fromKey, toKey,
>                  capacity, flags, filter));
>
>      }
> @@ -173,4 +175,65 @@
>
>  }
>
> +    public ICounterSet getCounters() {
> +        return src.getCounters();
> +    }
> +
> +    public Object submit(byte[] key, ISimpleIndexProcedure proc) {
> +        return src.submit(key, proc);
> +    }
> +
> +    public void submit(byte[] fromKey, byte[] toKey,
> +        IKeyRangeIndexProcedure proc,... [truncated message content] |
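The ReadOnlyIndex change quoted above replaces inheritance from DelegateIndex with plain delegation: the wrapper now holds the source IIndex in a `src` field, forwards the read operations to it, and supplies the remaining interface methods explicitly. A minimal sketch of that wrapper pattern is below; the `SimpleIndex` interface and classes are invented for illustration and are not the bigdata IIndex API.

```java
import java.util.HashMap;
import java.util.Map;

// Invented stand-in for an index interface (not the bigdata IIndex API).
interface SimpleIndex {
    byte[] lookup(String key);
    void insert(String key, byte[] value);
}

// A mutable source index backed by a HashMap.
class HashIndex implements SimpleIndex {
    private final Map<String, byte[]> data = new HashMap<String, byte[]>();
    public byte[] lookup(String key) { return data.get(key); }
    public void insert(String key, byte[] value) { data.put(key, value); }
}

// Read-only view: holds the source index and forwards only the safe (read)
// operations, rather than inheriting forwarding behavior from a delegating
// base class. Mutation is explicitly disallowed.
class ReadOnlySimpleIndex implements SimpleIndex {
    private final SimpleIndex src;
    ReadOnlySimpleIndex(SimpleIndex src) { this.src = src; }
    public byte[] lookup(String key) { return src.lookup(key); }
    public void insert(String key, byte[] value) {
        throw new UnsupportedOperationException("index is read-only");
    }
}

public class ReadOnlyWrapperDemo {
    public static void main(String[] args) {
        SimpleIndex base = new HashIndex();
        base.insert("k", new byte[] { 42 });
        SimpleIndex ro = new ReadOnlySimpleIndex(base);
        System.out.println("lookup: " + ro.lookup("k")[0]);
        try {
            ro.insert("x", new byte[0]);
        } catch (UnsupportedOperationException e) {
            System.out.println("insert rejected: " + e.getMessage());
        }
    }
}
```

One upside of implementing the interface directly, as the diff does, is that any method later added to the interface becomes a compile error until the wrapper forwards (or forbids) it explicitly, instead of being inherited with mutable behavior by accident.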
|
From: husdon <no...@no...> - 2010-10-06 15:04:19
|
See <http://localhost/job/BigData/162/> |
|
From: Bryan T. <br...@sy...> - 2010-10-06 13:56:56
|
All,
There were perhaps 50+ instances of QuorumPeerMain running on the hudson CI machine. Is anyone aware of a change to the ant junit runner that might account for a failure to tear down zookeeper? The machine runs CI for the trunk, the quads query branch, and the journal HA branch, so a change in any of those branches could have given rise to this problem. The change could go back as much as a month, since hudson had been terminated (or died) on that machine and I only restarted it a few days ago. Those 50+ QuorumPeerMain instances were all created in the last few days.
Thanks,
Bryan
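One way a test harness can keep forked child JVMs such as QuorumPeerMain from outliving a run, no matter how the junit runner exits, is to destroy them in a finally block rather than relying on runner cleanup. The sketch below is illustrative only: `sleep 60` stands in for a long-lived child process, and this is not the bigdata test harness.

```java
// Illustrative sketch: always tear down a spawned child process, even when
// the test body throws. "sleep 60" is a stand-in for a long-lived child JVM
// such as QuorumPeerMain (assumes a Unix-like host with a `sleep` command).
public class TeardownDemo {
    public static void main(String[] args) throws Exception {
        Process child = new ProcessBuilder("sleep", "60").start();
        try {
            System.out.println("child alive: " + isAlive(child));
            // ... the test body would talk to the child service here ...
        } finally {
            child.destroy();   // runs even if the test body threw
            child.waitFor();   // reap the child so nothing is left behind
            System.out.println("child exited");
        }
    }

    // Process.isAlive() only exists since Java 8; for the Java 6 era of this
    // thread, poll exitValue(), which throws while the process is running.
    private static boolean isAlive(Process p) {
        try {
            p.exitValue();
            return false;
        } catch (IllegalThreadStateException e) {
            return true;
        }
    }
}
```

A shutdown hook registered via `Runtime.getRuntime().addShutdownHook(...)` gives similar protection when the forked test JVM itself is killed rather than exiting normally.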
________________________________________
From: husdon [no...@no...]
Sent: Wednesday, October 06, 2010 9:39 AM
To: big...@li...; tho...@us...
Subject: Build failed in Hudson: BigData #161
See <http://localhost/job/BigData/161/changes>
Changes:
[thompsonbry] Applying the as-found serialVersionUID to BaseVocabulary. The class did not have an explicit serialVersionUID. This change should be backward compatible with existing stores created from the trunk.
------------------------------------------
[...truncated 239 lines...]
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
[junitreport] WARN : 2242240 main-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1001): Ignoring exception during shutdown input
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2242241 main-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1006): Ignoring exception during shutdown output
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2242543 pool-1-thread-771-SendThread org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:967): Exception closing session 0x12b81b15fe60003 to sun.nio.ch.SelectionKeyImpl@7240c519
[junitreport] java.net.ConnectException: Connection refused
[junitreport] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junitreport] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
[junitreport] WARN : 2242543 pool-1-thread-771-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1001): Ignoring exception during shutdown input
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2242543 pool-1-thread-771-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1006): Ignoring exception during shutdown output
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2242721 pool-1-thread-772-SendThread org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:967): Exception closing session 0x12b81b15fe60004 to sun.nio.ch.SelectionKeyImpl@1a07cf46
[junitreport] java.net.ConnectException: Connection refused
[junitreport] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junitreport] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
[junitreport] WARN : 2242721 pool-1-thread-772-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1001): Ignoring exception during shutdown input
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2242721 pool-1-thread-772-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1006): Ignoring exception during shutdown output
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2242944 pool-1-thread-770-SendThread org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:967): Exception closing session 0x12b81b15fe60001 to sun.nio.ch.SelectionKeyImpl@7c2d75b
[junitreport] java.net.ConnectException: Connection refused
[junitreport] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junitreport] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
[junitreport] WARN : 2242944 pool-1-thread-770-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1001): Ignoring exception during shutdown input
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2242945 pool-1-thread-770-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1006): Ignoring exception during shutdown output
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2243439 main-SendThread org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:967): Exception closing session 0x12b81afe2e40002 to sun.nio.ch.SelectionKeyImpl@28e40274
[junitreport] java.net.ConnectException: Connection refused
[junitreport] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junitreport] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
[junitreport] WARN : 2243439 main-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1001): Ignoring exception during shutdown input
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2243439 main-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1006): Ignoring exception during shutdown output
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2243481 pool-1-thread-769-SendThread org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:967): Exception closing session 0x12b81b15fe60002 to sun.nio.ch.SelectionKeyImpl@115d22a5
[junitreport] java.net.ConnectException: Connection refused
[junitreport] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junitreport] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
[junitreport] WARN : 2243481 pool-1-thread-769-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1001): Ignoring exception during shutdown input
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2243482 pool-1-thread-769-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1006): Ignoring exception during shutdown output
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] Processing <http://localhost/job/BigData/ws/trunk/ant-build/classes/test/test-results/TESTS-TestSuites.xml> to /tmp/null1907800227
[junitreport] Loading stylesheet jar:file:/usr/local/apache-ant-1.8.1/lib/ant-junit.jar!/org/apache/tools/ant/taskdefs/optional/junit/xsl/junit-frames.xsl
[junitreport] WARN : 2244318 pool-1-thread-772-SendThread org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:967): Exception closing session 0x12b81b15fe60004 to sun.nio.ch.SelectionKeyImpl@26b85530
[junitreport] java.net.ConnectException: Connection refused
[junitreport] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junitreport] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
[junitreport] WARN : 2244318 pool-1-thread-772-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1001): Ignoring exception during shutdown input
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2244318 pool-1-thread-772-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1006): Ignoring exception during shutdown output
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2244380 pool-1-thread-770-SendThread org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:967): Exception closing session 0x12b81b15fe60001 to sun.nio.ch.SelectionKeyImpl@368ad2e7
[junitreport] java.net.ConnectException: Connection refused
[junitreport] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junitreport] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
[junitreport] WARN : 2244380 pool-1-thread-770-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1001): Ignoring exception during shutdown input
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2244381 pool-1-thread-770-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1006): Ignoring exception during shutdown output
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2244644 pool-1-thread-771-SendThread org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:967): Exception closing session 0x12b81b15fe60003 to sun.nio.ch.SelectionKeyImpl@3378613b
[junitreport] java.net.ConnectException: Connection refused
[junitreport] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junitreport] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
[junitreport] WARN : 2244645 pool-1-thread-771-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1001): Ignoring exception during shutdown input
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2244645 pool-1-thread-771-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1006): Ignoring exception during shutdown output
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2244754 main-SendThread org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:967): Exception closing session 0x12b81afe2e40002 to sun.nio.ch.SelectionKeyImpl@3ec9b04d
[junitreport] java.net.ConnectException: Connection refused
[junitreport] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junitreport] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
[junitreport] WARN : 2244754 main-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1001): Ignoring exception during shutdown input
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2244755 main-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1006): Ignoring exception during shutdown output
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] Transform time: 1288ms
[junitreport] Deleting: /tmp/null1907800227
stopLookup:
[echo] java -Dapp.home=<http://localhost/job/BigData/ws/trunk> -Djini.lib=<http://localhost/job/BigData/ws/trunk/dist/bigdata/lib> -Djini.lib.dl=<http://localhost/job/BigData/ws/trunk/dist/bigdata/lib-dl> -Djava.security.policy=<http://localhost/job/BigData/ws/trunk/dist/bigdata/var/config/policy/policy.all> -Dlog4j.configuration=resources/logging/log4j.properties -Djava.net.preferIPv4Stack=true -Dbigdata.fedname=bigdata.test.group-dutl-56.us.msudev.noklab.net -Ddefault.nic=${default.nic} -jar <http://localhost/job/BigData/ws/trunk/bigdata-test/lib/lookupstarter.jar> -stop
[echo]
BUILD FAILED
<http://localhost/job/BigData/ws/trunk/build.xml>:1609: The following error occurred while executing this line:
<http://localhost/job/BigData/ws/trunk/build.xml>:1672: java.io.IOException: Cannot run program "/usr/java/jdk1.6.0_17/jre/bin/java": java.io.IOException: error=12, Cannot allocate memory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
at java.lang.Runtime.exec(Runtime.java:593)
at org.apache.tools.ant.taskdefs.Execute$Java13CommandLauncher.exec(Execute.java:827)
at org.apache.tools.ant.taskdefs.Execute.launch(Execute.java:445)
at org.apache.tools.ant.taskdefs.Execute.execute(Execute.java:459)
at org.apache.tools.ant.taskdefs.Java.fork(Java.java:791)
at org.apache.tools.ant.taskdefs.Java.executeJava(Java.java:214)
at org.apache.tools.ant.taskdefs.Java.executeJava(Java.java:135)
at org.apache.tools.ant.taskdefs.Java.execute(Java.java:108)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397)
at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1249)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397)
at org.apache.tools.ant.Project.executeTarget(Project.java:1366)
at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
at org.apache.tools.ant.Project.executeTargets(Project.java:1249)
at org.apache.tools.ant.Main.runBuild(Main.java:801)
at org.apache.tools.ant.Main.startAnt(Main.java:218)
at org.apache.tools.ant.launch.Launcher.run(Launcher.java:280)
at org.apache.tools.ant.launch.Launcher.main(Launcher.java:109)
Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
at java.lang.ProcessImpl.start(ProcessImpl.java:65)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
... 37 more
Total time: 37 minutes 48 seconds
Shutting down: Wed Oct 06 09:39:04 EDT 2010
WARN : 2244974 pool-1-thread-769-SendThread org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:967): Exception closing session 0x12b81b15fe60002 to sun.nio.ch.SelectionKeyImpl@7d11ef2f
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
WARN : 2244975 pool-1-thread-769-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1001): Ignoring exception during shutdown input
java.nio.channels.ClosedChannelException
at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
WARN : 2244975 pool-1-thread-769-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1006): Ignoring exception during shutdown output
java.nio.channels.ClosedChannelException
at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
Publishing Javadoc
Archiving artifacts
Recording test results
Performing Post build task...
Could not match :JUNIT RUN COMPLETE : False
Logical operation result is FALSE
Skipping script : /root/rsync.sh
END OF POST BUILD TASK : 0
|
|
From: husdon <no...@no...> - 2010-10-06 13:40:20
|
See <http://localhost/job/BigData/161/changes>
Changes:
[thompsonbry] Applying the as-found serialVersionUID to BaseVocabulary. The class did not have an explicit serialVersionUID. This change should be backward compatible with existing stores created from the trunk.
------------------------------------------
[...truncated 239 lines...]
[junitreport] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junitreport] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
[junitreport] WARN : 2244318 pool-1-thread-772-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1001): Ignoring exception during shutdown input
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2244318 pool-1-thread-772-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1006): Ignoring exception during shutdown output
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2244380 pool-1-thread-770-SendThread org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:967): Exception closing session 0x12b81b15fe60001 to sun.nio.ch.SelectionKeyImpl@368ad2e7
[junitreport] java.net.ConnectException: Connection refused
[junitreport] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junitreport] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
[junitreport] WARN : 2244380 pool-1-thread-770-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1001): Ignoring exception during shutdown input
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2244381 pool-1-thread-770-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1006): Ignoring exception during shutdown output
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2244644 pool-1-thread-771-SendThread org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:967): Exception closing session 0x12b81b15fe60003 to sun.nio.ch.SelectionKeyImpl@3378613b
[junitreport] java.net.ConnectException: Connection refused
[junitreport] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junitreport] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
[junitreport] WARN : 2244645 pool-1-thread-771-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1001): Ignoring exception during shutdown input
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2244645 pool-1-thread-771-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1006): Ignoring exception during shutdown output
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2244754 main-SendThread org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:967): Exception closing session 0x12b81afe2e40002 to sun.nio.ch.SelectionKeyImpl@3ec9b04d
[junitreport] java.net.ConnectException: Connection refused
[junitreport] at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[junitreport] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
[junitreport] WARN : 2244754 main-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1001): Ignoring exception during shutdown input
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] WARN : 2244755 main-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1006): Ignoring exception during shutdown output
[junitreport] java.nio.channels.ClosedChannelException
[junitreport] at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
[junitreport] at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
[junitreport] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
[junitreport] Transform time: 1288ms
[junitreport] Deleting: /tmp/null1907800227
stopLookup:
[echo] java -Dapp.home=<http://localhost/job/BigData/ws/trunk> -Djini.lib=<http://localhost/job/BigData/ws/trunk/dist/bigdata/lib> -Djini.lib.dl=<http://localhost/job/BigData/ws/trunk/dist/bigdata/lib-dl> -Djava.security.policy=<http://localhost/job/BigData/ws/trunk/dist/bigdata/var/config/policy/policy.all> -Dlog4j.configuration=resources/logging/log4j.properties -Djava.net.preferIPv4Stack=true -Dbigdata.fedname=bigdata.test.group-dutl-56.us.msudev.noklab.net -Ddefault.nic=${default.nic} -jar <http://localhost/job/BigData/ws/trunk/bigdata-test/lib/lookupstarter.jar> -stop
[echo]
BUILD FAILED
<http://localhost/job/BigData/ws/trunk/build.xml>:1609: The following error occurred while executing this line:
<http://localhost/job/BigData/ws/trunk/build.xml>:1672: java.io.IOException: Cannot run program "/usr/java/jdk1.6.0_17/jre/bin/java": java.io.IOException: error=12, Cannot allocate memory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
at java.lang.Runtime.exec(Runtime.java:593)
at org.apache.tools.ant.taskdefs.Execute$Java13CommandLauncher.exec(Execute.java:827)
at org.apache.tools.ant.taskdefs.Execute.launch(Execute.java:445)
at org.apache.tools.ant.taskdefs.Execute.execute(Execute.java:459)
at org.apache.tools.ant.taskdefs.Java.fork(Java.java:791)
at org.apache.tools.ant.taskdefs.Java.executeJava(Java.java:214)
at org.apache.tools.ant.taskdefs.Java.executeJava(Java.java:135)
at org.apache.tools.ant.taskdefs.Java.execute(Java.java:108)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397)
at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1249)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397)
at org.apache.tools.ant.Project.executeTarget(Project.java:1366)
at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
at org.apache.tools.ant.Project.executeTargets(Project.java:1249)
at org.apache.tools.ant.Main.runBuild(Main.java:801)
at org.apache.tools.ant.Main.startAnt(Main.java:218)
at org.apache.tools.ant.launch.Launcher.run(Launcher.java:280)
at org.apache.tools.ant.launch.Launcher.main(Launcher.java:109)
Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
at java.lang.ProcessImpl.start(ProcessImpl.java:65)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
... 37 more
Total time: 37 minutes 48 seconds
Shutting down: Wed Oct 06 09:39:04 EDT 2010
WARN : 2244974 pool-1-thread-769-SendThread org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:967): Exception closing session 0x12b81b15fe60002 to sun.nio.ch.SelectionKeyImpl@7d11ef2f
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:933)
WARN : 2244975 pool-1-thread-769-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1001): Ignoring exception during shutdown input
java.nio.channels.ClosedChannelException
at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:999)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
WARN : 2244975 pool-1-thread-769-SendThread org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1006): Ignoring exception during shutdown output
java.nio.channels.ClosedChannelException
at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1004)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:970)
Publishing Javadoc
Archiving artifacts
Recording test results
Performing Post build task...
Could not match :JUNIT RUN COMPLETE : False
Logical operation result is FALSE
Skipping script : /root/rsync.sh
END OF POST BUILD TASK : 0
|
|
From: husdon <no...@no...> - 2010-10-04 16:52:12
|
See <http://localhost/job/BigData/changes> |
|
From: Bryan T. <br...@sy...> - 2010-09-28 12:22:40
|
All, I have merged from the trunk to the JOURNAL HA branch. This picked up 28 changes with no conflicts. I am going to run the CI tests a few times on this to look for test failures/regressions before merging to the trunk and closing out this branch. My local SVN merge notes include a comment that: Note: The edits by BrianM to fix the test data URIs are at least partially missing in the HA branch. It looks like these changes should have been introduced in r2599, which is just before the start of the HA branch, and possibly r3305. I would appreciate it if someone could spot-check this and see whether or not it is true. Thanks, Bryan |
|
From: husdon <no...@no...> - 2010-09-28 00:37:41
|
See <http://localhost/job/BigData/changes> |
|
From: Bryan T. <br...@sy...> - 2010-09-27 23:19:47
|
Well, it turns out that the CI build machine was not running hudson. I've adjusted the runstate for the hudson start and added the quads query branch to those which it is building. Bryan |
|
From: Bryan T. <br...@sy...> - 2010-09-27 20:18:36
|
We really ought to nail this problem of child process termination in the deployed system and in the unit tests. There is sometimes a nasty gotcha with sudden process termination hooks. Bryan

-----Original Message-----
From: ble...@us... [mailto:ble...@us...]
Sent: Monday, September 27, 2010 4:12 PM
To: big...@li...
Subject: [Bigdata-commit] SF.net SVN: bigdata:[3642] branches/maven_scaleout/bigdata-integ/src/test/resources

Revision: 3642
http://bigdata.svn.sourceforge.net/bigdata/?rev=3642&view=rev
Author: blevine218
Date: 2010-09-27 20:12:07 +0000 (Mon, 27 Sep 2010)

Log Message:
-----------
add ServiceStarter to the list of Java processes that might be left over and should be forcibly killed.

Modified Paths:
--------------
branches/maven_scaleout/bigdata-integ/src/test/resources/kill_leftover_processes.sh
branches/maven_scaleout/bigdata-integ/src/test/resources/services.xml

Modified: branches/maven_scaleout/bigdata-integ/src/test/resources/kill_leftover_processes.sh
===================================================================
--- branches/maven_scaleout/bigdata-integ/src/test/resources/kill_leftover_processes.sh 2010-09-27 20:01:43 UTC (rev 3641)
+++ branches/maven_scaleout/bigdata-integ/src/test/resources/kill_leftover_processes.sh 2010-09-27 20:12:07 UTC (rev 3642)
@@ -1,7 +1,11 @@
 #! /bin/bash
 #
-# Kill all instances of the java QuorumPeerMain process
+# Kill all instances of the java QuorumPeerMain and ServiceStarter processes.
+# This is a bit dangerous in a CI environment as we might have other
+# builds in progress that have launched these Java processes. This should suffice for the short-term though.
+#
 # TODO: Make this more generic so that we can kill processes
-# with other names as well
+# with other names as well.
 #
 kill `jps | grep QuorumPeerMain | cut -d' ' -f1 `
+kill `jps | grep ServiceStarter | cut -d' ' -f1 `

Modified: branches/maven_scaleout/bigdata-integ/src/test/resources/services.xml
===================================================================
--- branches/maven_scaleout/bigdata-integ/src/test/resources/services.xml 2010-09-27 20:01:43 UTC (rev 3641)
+++ branches/maven_scaleout/bigdata-integ/src/test/resources/services.xml 2010-09-27 20:12:07 UTC (rev 3642)
@@ -14,6 +14,10 @@
   <target name="start">
     <echo message="Starting support services..." />
+
+    <!-- Probably overkill, but here just in case the previous run
+         failed to kill the leftover processes for some reason -->
+    <antcall target="kill-leftover-processes" />
     <antcall target="dumpProps" />
     <antcall target="startHttpd" />
     <antcall target="startLookup" />

This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
------------------------------------------------------------------------------
Start uncovering the many advantages of virtual appliances and start using them to simplify application deployment and accelerate your shift to cloud computing. http://p.sf.net/sfu/novell-sfdev2dev
_______________________________________________
Bigdata-commit mailing list
Big...@li...
https://lists.sourceforge.net/lists/listinfo/bigdata-commit |
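The `jps | grep … | cut` pipeline used by the kill script above is easy to sanity-check against canned input. A minimal sketch (the process names and PIDs below are invented for illustration, and the sketch prints the PID instead of killing anything):

```shell
#!/bin/sh
# Simulated `jps` output, so the extraction can be tested without live JVMs.
sample_jps_output="2345 QuorumPeerMain
6789 ServiceStarter
1111 Jps"

# Same extraction the kill script performs: match the process name, take the PID.
pid=$(printf '%s\n' "$sample_jps_output" | grep ServiceStarter | cut -d' ' -f1)
echo "would kill PID: $pid"   # prints: would kill PID: 6789
```

As the commit log itself warns, matching by class name alone is risky on a shared CI host, since an unrelated build may have launched a process with the same name; scoping the match (e.g. by user or working directory) would be safer.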
|
From: Bryan T. <br...@sy...> - 2010-09-27 12:11:26
|
Brian, I've linked the maven notes page [1] from the main wiki page [2]. Could either you or Sean accept the issue [3] so people know that it is in progress and update the issue weekly so other people can track what is happening in the branch? Thanks, Bryan

[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=MavenNotes
[2] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page
[3] https://sourceforge.net/apps/trac/bigdata/ticket/168

> -----Original Message-----
> From: Brian Levine
> Sent: Saturday, September 25, 2010 12:13 PM
> To: Bryan Thompson
> Subject: Re: [Bigdata-developers] (no subject)
>
> Hi,
>
> I've created a page here:
>
> https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=MavenNotes
>
> I didn't know whether you wanted to link to this page from another page, so I left that to you.
>
> Thanks.
>
> -b
>
> On 09/24/2010 06:06 PM, Bryan Thompson wrote:
> > Brian,
> >
> > Thanks for posting this information. Would you consider creating a wiki page for the maven integration to capture this information?
> >
> > Thanks,
> > Bryan
> > ________________________________________
> > From: Brian Levine
> > Sent: Friday, September 24, 2010 3:56 PM
> > To: big...@li...
> > Subject: [Bigdata-developers] (no subject)
> >
> > Hi all,
> >
> > Here are a few notes on the new bigdata-integ test module I recently checked in to the maven_scaleout branch.
> >
> > Overview
> >
> > The main goal of the bigdata-integ module is to separate "heavy-weight" tests from the standard unit tests that a developer would run during a typical development cycle. Also included are tests that require some external infrastructure to be set up (e.g. a lookup service) prior to running the tests. So the term "integration tests" is not particularly accurate and is really a catch-all for anything that we don't want to be part of the lighter-weight unit tests. As the componentization of bigdata proceeds, we will see some of these integration tests move back into their respective components but still be separated from the light-weight unit tests. Maven's separation of unit and integration tests into separate phases of the build lifecycle makes this pretty straightforward.
> >
> > We've only moved about 30-ish tests so far and we'll tackle more going forward.
> >
> > How to run the integration tests
> >
> > The bigdata-integ module is now referenced in the bigdata parent POM. If you run a build from the parent project, you'll build bigdata-core, generate its artifacts including the deployment tarball, and run the integration tests. Note that the unit tests are currently not being run during the build, but this should change soon.
> >
> > mvn clean install
> >
> > If you've already built bigdata-core and just want to run the integration tests, you can run this from the bigdata-integ directory:
> >
> > cd bigdata-integ
> > mvn clean integration-test (mvn clean install will also work)
> >
> > How to not run the integration tests
> >
> > If you don't want to run the integration tests, it's easy. You can:
> >
> > - Run the build from the bigdata-core project rather than the parent project.
> > - OR from the parent project, add the -DskipITs argument on the command-line (this skips integration tests just as -DskipTests skips unit tests)
> > - OR from the parent project, run mvn clean package instead of mvn clean install. Since the integration-test build phase comes after the package build phase, the integration tests won't be executed.
> >
> > Thanks.
> >
> > -brian
|
From: Bryan T. <br...@sy...> - 2010-09-26 01:58:58
|
Sean, What follows is where I think the module divisions might be for a fine-grained mavenization. I've tried to give this in increasing dependency order. As you've noted, there are a variety of places where an API module needs to be defined to break a cycle. This is a lot of possible divisions. However, we do not need JARs for all of this. For example, there could be one JAR per concrete service implementation without having jars for their abstract implementations.

- commons-io, -util, -striterator, etc.
- counters (but concrete implementations of counter stores will be backed by a RW journal).
- store-api (IRawStore, IBigdataFederation, IResourceMetadata, etc).
- btree (everything under com.bigdata.btree).
- sparse (the key-value store, which is a dependency for journals and services).
- query (the query engine and non-RDF-specific query operators: com.bigdata.relation, com.bigdata.bop)
- services-api (APIs for the various services)
- services-async-api (asynchronous write pipeline APIs, which are used by the RDF bulk loader)
- rdf
- sail
- journal (WORM, RW, etc).
- journal-HA (journal + HA, which brings in jini and zookeeper)
- services-async-impl (used for distributing jobs between masters and clients and for asynchronous index writes in the bulk loader).
- services-tx, -mds, -ds, -lbs, -cs, etc. The abstract service implementations. The data service module would get com.bigdata.resources, which is the shard container stuff.
- query-peer. The version of the query engine which is federation aware. The peer lives in the data service, but I could see cases where we might want to reuse this to have query against the performance counters in the load balancer.
- services-jini-tx, -mds, -ds, -lbs, -cs, etc. The concrete implementations of the various services using jini.

There are definitely going to be explicit casts to AbstractJournal from the rdf/sail stuff, but I think that we can work through those. They are mainly for reporting out various interesting details.
Eventually, I think that both the load balancer and the metadata service might be decomposed or deconstructed. The load balancer could be split into a stateless piece which queries a service that aggregates and manages performance counters for the federation. The metadata service could be decomposed into a P2P protocol with the data services self-reporting the shards for which they have responsibility. Thanks, Bryan |
|
From: Bryan T. <br...@sy...> - 2010-09-25 17:51:14
|
All, Attached is a worksheet containing:

- A simple cost model for disk.
- A cost model for the B+Tree, the index segment, and an approximate cost model for a shard.
- A decision tree for named graph queries, including named graph handling in scale-out.
- A decision tree for default graph queries, including default graph handling in scale-out.

The decision trees cover several special cases, but there may well be others which can be optimized. The default graph decision tree is significantly more complicated due to the need to impose the DISTINCT constraint on the access paths. The proposed design for default graph query in scale-out is to use remote access paths, pulling the data to the node on which the join is running rather than pushing the binding sets to the nodes on which the data rests for the next join. You specify this behavior by choosing the evaluation context for the join. SHARDED means the binding sets are mapped across the shards for local reads, in which case the access path is also annotated to specify a local access path. ANY means that the JOIN operator is executed on any node which has binding sets for that join. In this case the access path is always REMOTE so you can pull the data from the various nodes onto the node where that join is running. We might be able to eliminate duplicate access paths in scale-out by specifying "HASHED" (aka hash partitioned) rather than ANY for the default graph joins. I have not yet implemented the logic for hash partitioning the binding sets across the nodes, but it should be trivial. This file is in bigdata/src/architecture in the quads branch. I will update it once I get this logic into the query planner. Thanks, Bryan |
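The hash-partitioning step described above is simple to sketch. The following is illustrative only (not the bigdata implementation): it uses POSIX `cksum` as a stand-in hash to assign binding values to one of four nodes. Because the assignment is deterministic, the same value always lands on the same node, which is what lets HASHED evaluation avoid sending duplicate work to multiple nodes:

```shell
#!/bin/sh
# Stand-in for hash-partitioning binding sets across a fixed number of nodes.
NODES=4
for value in book1 author7 publisher3; do
  # cksum's first field is a CRC of the input; modulo picks the target node.
  crc=$(printf '%s' "$value" | cksum | cut -d' ' -f1)
  echo "$value -> node $((crc % NODES))"
done
```

The node names and binding values are made up; the point is only that the partition function is a pure function of the value, so duplicate elimination can happen per node.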
|
From: Bryan T. <br...@sy...> - 2010-09-24 22:07:43
|
Brian, Thanks for posting this information. Would you consider creating a wiki page for the maven integration to capture this information? Thanks, Bryan ________________________________________ From: Brian Levine [br...@br...] Sent: Friday, September 24, 2010 3:56 PM To: big...@li... Subject: [Bigdata-developers] (no subject) Hi all, Here are a few notes on the new bigdata-integ test module I recently checked in to the maven_scaleout branch. Overview The main goal of the bigdata-integ module is to separate “heavy-weight” tests from the standard unit tests that a developer would run during a typical development cycle. Also included are tests that require some external infrastructure to be set up (e.g. a lookup service) prior to running the tests. So the term “integration tests” is not particularly accurate and is really a catch-all for anything that we don’t want to be part of the lighter-weight unit tests. As the componentization of bigdata proceeds, we will see some of these integration tests move back into their respective components but still be separated from the light-weight unit tests. Maven’s separation of unit and integration tests into separate phases of the build lifecycle makes this pretty straightforward. We’ve only moved about 30-ish tests so far and we’ll tackle more going forward. How to run the integration tests The bigdata-integ module is now referenced in the bigdata parent POM. If you run a build from the parent project, you’ll build bigdata-core, generate its artifacts including the deployment tarball, and run the integration tests. Note that the unit tests are currently not being run during the build, but this should change soon. 
mvn clean install

If you’ve already built bigdata-core and just want to run the integration tests, you can run this from the bigdata-integ directory:

cd bigdata-integ
mvn clean integration-test (mvn clean install will also work)

How to not run the integration tests

If you don’t want to run the integration tests, it’s easy. You can:

- Run the build from the bigdata-core project rather than the parent project.
- OR from the parent project, add the -DskipITs argument on the command-line (this skips integration tests just as -DskipTests skips unit tests)
- OR from the parent project, run mvn clean package instead of mvn clean install. Since the integration-test build phase comes after the package build phase, the integration tests won’t be executed.

Thanks. -brian |
|
From: Brian L. <br...@br...> - 2010-09-24 19:56:36
|
Hi all, Here are a few notes on the new bigdata-integ test module I recently checked in to the maven_scaleout branch.

*Overview*

The main goal of the bigdata-integ module is to separate “heavy-weight” tests from the standard unit tests that a developer would run during a typical development cycle. Also included are tests that require some external infrastructure to be set up (e.g. a lookup service) prior to running the tests. So the term “integration tests” is not particularly accurate and is really a catch-all for anything that we don’t want to be part of the lighter-weight unit tests. As the componentization of bigdata proceeds, we will see some of these integration tests move back into their respective components but still be separated from the light-weight unit tests. Maven’s separation of unit and integration tests into separate phases of the build lifecycle makes this pretty straightforward. We’ve only moved about 30-ish tests so far and we’ll tackle more going forward.

*How to run the integration tests*

The bigdata-integ module is now referenced in the bigdata parent POM. If you run a build from the parent project, you’ll build bigdata-core, generate its artifacts including the deployment tarball, and run the integration tests. Note that the unit tests are currently not being run during the build, but this should change soon.

mvn clean install

If you’ve already built bigdata-core and just want to run the integration tests, you can run this from the bigdata-integ directory:

cd bigdata-integ
mvn clean integration-test (mvn clean install will also work)

*How to not run the integration tests*

If you don’t want to run the integration tests, it’s easy. You can:

- Run the build from the bigdata-core project rather than the parent project.
- OR from the parent project, add the *-DskipITs* argument on the command-line (this skips integration tests just as -DskipTests skips unit tests)
- OR from the parent project, run mvn clean package instead of mvn clean install. Since the integration-test build phase comes after the package build phase, the integration tests won’t be executed.

Thanks. -brian |
|
From: Bryan T. <br...@sy...> - 2010-09-14 19:09:02
|
Martyn, I've gotten up to 1.7B loaded so far on the CI Perf machine into the RW Store. Overall, I think that we are ready to bring the RWStore into the trunk. I'm just looking for a spare moment to do that and then we can close out that branch. However, I would like to do a bit more scale testing for the RW store. Given that we have so much better performance with the RWStore on SAS (due to the ability of SAS to merge writes based on the disk geometry), we should run the scaling tests on machines with SAS disks. Luckily, the 8-node cluster has SAS disks (300GB 10K 6G 2.5 SAS DP HDD). The cluster nodes listed below have four 300GB drives set up with hardware RAID0. LVM is also layered over this, giving effectively one ~1.2TB partition. This suggests that we could run scaling tests on the RW Store on nodes 206-209 as follows:

xx.xx.xx.206 uniprot (2.5B)
xx.xx.xx.207 LUBM 8000 (1B)
xx.xx.xx.208 BTC 2009 (1.14B)
xx.xx.xx.209 BTC 2010 (3.2B)

Those data sets are currently on the CI Perf machine in /data. We should also generate a larger LUBM data set, e.g., LUBM 100000. The scaling tests themselves are easy to set up (just check out bigdata from SVN and use the templates in the bigdata-perf module) and can be run disconnected using nohup. Please also nohup vmstat (vmstat -n 60) and iostat (iostat -x -N -d -k 60 -t) so we can get a good sense of how the IO system performs over time. You have to do that in separate directories since each nohup process will write to 'nohup.out' in the local directory. I'd like to get these runs done over the next week or so since we will need to have the cluster clear when we get into testing the quads query branch. Thanks, Bryan |
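The monitoring setup described above (one directory per nohup'd tool, so each gets its own nohup.out) can be scripted. This sketch only prints the commands it would run rather than launching vmstat/iostat, since those run indefinitely; the directory names are illustrative:

```shell
#!/bin/sh
# Prepare one working directory per monitor so each nohup'd process
# writes its own nohup.out, then print the launch command for each.
for tool in "vmstat -n 60" "iostat -x -N -d -k 60 -t"; do
  dir="monitor-${tool%% *}"         # e.g. monitor-vmstat, monitor-iostat
  mkdir -p "$dir"
  # Print rather than launch in this sketch; drop the echo to actually run.
  echo "(cd $dir && nohup $tool &)"
done
```

Running the printed commands from a login shell detaches the monitors, matching the "run disconnected using nohup" advice in the note.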
|
From: Bryan T. <br...@sy...> - 2010-09-10 17:02:12
|
All,
Martyn and I have been running some performance tests against the RW Journal mode and are currently running it through some larger data sets loads (several billion triples). Things are looking good and we plan to bring the RWStore into the trunk shortly.
The RW Journal mode provides a nice complement to the current WORM journal. Like the WORM, it is able to read from historical commit points (version history). However, once the history retention period expires, it will release older database revisions and reuse the space on the disk allocated to those recycled revisions. When the retention time is set to zero milliseconds, the RW mode can efficiently recycle allocations on the disk and has a much smaller footprint than the WORM (6x less space on the disk in the table below). One advantage of the RW mode is that the recycling of allocations on the disk makes it practical to use much larger branching factors when compared to the WORM. This accounts for the query performance difference in the table below between the WORM and RW journal modes. We have nearly 2x the query performance in the cold disk condition when the branching factor is raised from m=32 to m=128. (We see this same effect in scale-out, which uses larger branching factors for the index segments generated by dynamic sharding).
The table below gives some results on BSBM 100M for the RW and WORM journal modes. These runs are using the BSBM reduced query mix with query 3 excluded. Query 3 is being delegated to Sesame and runs very slowly, which is why it is excluded from these runs. We will publish official BSBM results once we run query 3 natively -- ideally sometime late this month as part of evaluating the refactored query engine.
The BSBM metric is QMpH (Query Mixes per Hour); higher is better. The best published score on BSBM 100M for 4 concurrent clients using the reduced query mix is 2822. You cannot directly compare the performance against that published number because the hardware, the number of concurrent clients, and the query mixes are different. However, I think that the scores below are clearly highly competitive.
There are three QMpH columns. The cold disk condition was achieved by dropping the operating system file cache and is the baseline for this table. BSBM formulates the query mixes using a random number seed. The "Different Seed" condition shows what happens when the disk cache is warm and we change the seed, which changes the query mix. The "Same Seed" condition shows what happens when the disk cache is warm and we re-run the benchmark a second time using the same seed. Performance in the "Different Seed" column is improved over the baseline due to overlap in query requests between the sets of query mixes. The "Same Seed" condition shows the potential throughput when most queries do not touch the disk. Disk utilization is 98% or higher for all three conditions, as measured by iostat.
When running against Solid State Disk (SSD) we capture most of the "Same Seed" performance gain. SSD is 8x faster for the Cold Disk condition and 2x faster for the Different Seed condition.
Finally, BSBM has a lot of relatively large literals (greater than 1KB in length). It is interesting to note that reducing the branching factor on the lexicon indices improves the load performance, but results in poorer query performance.
Thanks,
Bryan
BSBM 100M with 8 concurrent clients.
Journal Mode | Load Parameters | Load Rate | Disk Space (GB) | QMpH, Cold Disk | QMpH, Different Seed | QMpH, Same Seed
WORM | q=8000, m=32 | 13,074 | 125 | 4234 | 8022 | 39467
RW | q=4000, m=128 | 12,781 | 21 | 7468 | 12947 | 37804
RW | q=4000, m=128; T2ID and ID2T overridden to m=32 | 16,307 | 21 | 5631 | 9740 | 36731
q is the write retention queue capacity.
m is the default branching factor.
The load rate is in triples per second.
Only 50 warmup trials were used.
The JVM is hot in all runs.
The Same Seed condition reports on running the benchmark twice in a row using the same seed in the second run.
The machine is a quad-core AMD Phenom II X4 with 8MB cache at 3GHz running CentOS with 16GB of RAM and a striped RAID array with SATA 15k spindles (Seagate Cheetah with 16MB cache, 3.5").
|
|
From: Sean G. <sea...@no...> - 2010-09-09 15:57:20
|
On the PortUtil topic:
It is customary for applications to use well known ports for well known
protocols (http, ssh, etc), but use sequentially offset ports for each
custom protocol. Typically the default starting port is one a regular
user could bind on most platforms, somewhere between 1025 and 9999. If
the protocol only supports a single connection between two machines it
is also customary for the initiator to bind both local and remote ports
to the same number. This is common with peer-to-peer and routing
protocols, but client-server protocols tend to use 'random' ports for
the local port.
Something like:
int BIGDATA_STARTING_PORT =
Integer.parseInt(System.getProperty("bigdata.starting.port", "5530"));
int BULK_TRANSFER_PORT = BIGDATA_STARTING_PORT + 0;
int HA_HEARTBEAT_PORT = BIGDATA_STARTING_PORT + 1;
int HA_DOWNSTREAM_PORT = BIGDATA_STARTING_PORT + 2;
These conventions allow network administrators to quickly setup firewall
rules using simple constructs like 'allow all traffic between
application nodes with a destination port between 5530 and 5532'.
One tricky bit is that this makes running multiple instances of the same
app on a single box difficult. Getting each app to bind to a different
port range is a simple config problem, but knowing the ports for other
apps is more difficult. Often this is solved by creating globally shared
config that all apps can read. I personally find this approach
distasteful since globally shared state usually impacts scalability and
fault tolerance. Since BigData uses JINI, a more robust approach is
for each listener to publish port info as a JINI attribute in the
registry. One step better is for listeners to publish smart proxies,
which can completely insulate clients from the listener's ip, listening
port, wire protocol, etc...
Sean
On 09/09/2010 09:41 AM, ext Brian Murphy wrote:
> On Wed, Sep 8, 2010 at 8:18 AM, Bryan Thompson <br...@sy...
> <mailto:br...@sy...>> wrote:
>
> I am wondering how we should construct and expose for
> configuration ServerSockets for custom data transfer services.
> Right now we have two such services which open ServerSockets to
> do NIO operations on ByteBuffers.
>
> public ServerSocket(int port, int backlog, InetAddress
> bindAddr) throws IOException;
>
> This allows us to specify the port (or pass 0 for a random port)
> and to specify the local IP address on which the service will
> communicate. T
>
> it could also be nice to have the service talking on a configured
> port (for example, in order to simplify firewall configuration).
>
>
> It seems that the pattern in use right now to specify the
> INetAddress is:
>
> InetAddress.getByName(
> NicUtil.getIpAddress("default.nic"/*systemPropertyName*/,
> "default"/*defaultNicName*/, false/*loopback*/)
> );
>
> Presumably, the "default.nic" value could be replaced by the name
> of a different system property if we wanted to bind high volume
> data transfers onto a specific nic.
>
>
> If that is the intention, then perhaps we should define an
> alternative system property name for those cases and begin to use
> it at the appropriate locations in the code base.
>
>
> Yes, "default.nic" could be replaced with a different value.
> That value was created as a special value when you requested
> that a mechanism be added for telling the configuration that you
> would prefer that the system's default nic be used, whatever it
> may be, rather than a specific nic name. This was needed because
> the default nic on Windows cannot generally be known a priori, and
> may actually change upon reboot. Whereas the use of this
> 'default.nic' mechanism is very convenient for development, it is
> not necessarily the best solution for real deployment scenarios;
> where there is quite a bit more control of the environment, where
> there may be multiple nics one wishes to configure, and where
> a priori knowledge can be exploited to make life easier for the
> deployer in the data center.
>
> For example, if you look at my personal branch, you will see a class
> named ConfigDeployUtil that is used in each smart proxy's config
> file to retrieve things like the groups or nics or ports, from a single,
> top-level deployer-oriented config file that is in the form of a
> name=value text file rather than a jini config (which deployers seem
> to prefer over the jini config format). The ConfigDeployUtil class
> makes certain assumptions about its environment. So, although that
> class is not very general purpose, its intent is to shield the deployer
> from the jini config files; thus making the life of the deployer much
> simpler.
>
> I wouldn't necessarily expect ConfigDeployUtil to be used with the
> ServicesManagerService mechanism. And there are numerous
> options for what may be used to replace the "default.nic" parameter
> above; setting system properties being one such option.
>
> However, this still leaves unspecified a pattern to configure the
> port for the service. Would you envision a "PortUtil" (or an
> extension to NicUtil) which uses a similar pattern specifying the
> system property name and the default port name (or port number)?
>
>
> I don't think a utility such as a "PortUtil" is necessary. The purpose of
> the NicUtil was to provide code that can be executed from either
> the config file itself, or from other code in the system; allowing one
> to retrieve IP or Inet addresses based on a given network interface
> name. This provides a way then, for avoiding the configuration of
> specific IP addresses; instead configuring nic names, which vary far
> less greatly than IP addresses do.
>
> With respect to port numbers, it seems like a simple 'port = value'
> should suffice; whether you're using jini configs or any other sort
> of config file.
>
> BrianM
>
|
|
From: Brian M. <btm...@gm...> - 2010-09-09 13:41:43
|
On Wed, Sep 8, 2010 at 8:18 AM, Bryan Thompson <br...@sy...> wrote:
I am wondering how we should construct and expose for configuration
> ServerSockets for custom data transfer services. Right now we have two such
> services which open ServerSockets to do NIO operations on ByteBuffers.
>
> public ServerSocket(int port, int backlog, InetAddress bindAddr) throws
> IOException;
>
> This allows us to specify the port (or pass 0 for a random port) and to
> specify the local IP address on which the service will communicate. T
> it could also be nice to have the service talking on a configured port (for
> example, in order to simplify firewall configuration).
>
It seems that the pattern in use right now to specify the INetAddress is:
>
> InetAddress.getByName(
> NicUtil.getIpAddress("default.nic"/*systemPropertyName*/,
> "default"/*defaultNicName*/, false/*loopback*/)
> );
>
> Presumably, the "default.nic" value could be replaced by the name of a
> different system property if we wanted to bind high volume data transfers
> onto a specific nic.
> If that is the intention, then perhaps we should define an alternative
> system property name for those cases and begin to use it at the appropriate
> locations in the code base.
>
Yes, "default.nic" could be replaced with a different value.
That value was created as a special value when you requested
that a mechanism be added for telling the configuration that you
would prefer that the system's default nic be used, whatever it
may be, rather than a specific nic name. This was needed because
the default nic on Windows cannot generally be known a priori, and
may actually change upon reboot. Whereas the use of this
'default.nic' mechanism is very convenient for development, it is
not necessarily the best solution for real deployment scenarios;
where there is quite a bit more control of the environment, where
there may be multiple nics one wishes to configure, and where
a priori knowledge can be exploited to make life easier for the
deployer in the data center.
For example, if you look at my personal branch, you will see a class
named ConfigDeployUtil that is used in each smart proxy's config
file to retrieve things like the groups or nics or ports, from a single,
top-level deployer-oriented config file that is in the form of a
name=value text file rather than a jini config (which deployers seem
to prefer over the jini config format). The ConfigDeployUtil class
makes certain assumptions about its environment. So, although that
class is not very general purpose, its intent is to shield the deployer
from the jini config files; thus making the life of the deployer much
simpler.
I wouldn't necessarily expect ConfigDeployUtil to be used with the
ServicesManagerService mechanism. And there are numerous
options for what may be used to replace the "default.nic" parameter
above; setting system properties being one such option.
> However, this still leaves unspecified a pattern to configure the port for
> the service. Would you envision a "PortUtil" (or an extension to NicUtil)
> which uses a similar pattern specifying the system property name and the
> default port name (or port number)?
>
I don't think a utility such as a "PortUtil" is necessary. The purpose of
the NicUtil was to provide code that can be executed from either
the config file itself, or from other code in the system; allowing one
to retrieve IP or Inet addresses based on a given network interface
name. This provides a way then, for avoiding the configuration of
specific IP addresses; instead configuring nic names, which vary far
less greatly than IP addresses do.
With respect to port numbers, it seems like a simple 'port = value'
should suffice; whether you're using jini configs or any other sort
of config file.
BrianM
|
|
From: Bryan T. <br...@sy...> - 2010-09-08 12:31:52
|
I am also curious if there is a difference between an "ephemeral port" and a "free port". Ephemeral ports appear to be ports in a specific range (which might vary by the OS) and which are used for short lived service connections. If the service is to be long lived, should the free port be drawn from a different port range?
Thanks,
Bryan
> -----Original Message-----
> From: Bryan Thompson [mailto:br...@sy...]
> Sent: Wednesday, September 08, 2010 8:19 AM
> To: big...@li...
> Subject: [Bigdata-developers] How to configure the port and
> IP address for ServerSockets using NicUtil?
>
> Brian,
>
> I am wondering how we should construct and expose for
> configuration ServerSockets for custom data transfer
> services. Right now we have two such services which open
> ServerSockets to do NIO operations on ByteBuffers. These
> are the HA write replication pipeline and the ResourceService
> used to send shards around. I am currently extending the
> ResourceService to also ship around the intermediate results
> during distributed query evaluation.
>
> Reading the javadoc, it would appear that we should be using
> the following form of the ServerSocket constructor:
>
> public ServerSocket(int port, int backlog, InetAddress
> bindAddr) throws IOException;
>
> This allows us to specify the port (or pass 0 for a random
> port) and to specify the local IP address on which the
> service will communicate. The backlog can be specified as 0
> to use the default. For both of these services the port may
> be random since it will be disclosed by an RMI message, but
> it could also be nice to have the service talking on a
> configured port (for example, in order to simplify firewall
> configuration).
>
> It seems that the pattern in use right now to specify the
> INetAddress is:
>
> InetAddress.getByName(
> NicUtil.getIpAddress("default.nic"/*systemPropertyName*/,
> "default"/*defaultNicName*/, false/*loopback*/)
> );
>
> Presumably, the "default.nic" value could be replaced by the
> name of a different system property if we wanted to bind high
> volume data transfers onto a specific nic. If that is the
> intention, then perhaps we should define an alternative
> system property name for those cases and begin to use it at
> the appropriate locations in the code base.
>
> However, this still leaves unspecified a pattern to configure
> the port for the service. Would you envision a "PortUtil"
> (or an extension to NicUtil) which uses a similar pattern
> specifying the system property name and the default port name
> (or port number)?
>
> Thanks,
> Bryan
>
> --------------------------------------------------------------
> ----------------
> This SF.net Dev2Dev email is sponsored by:
>
> Show off your parallel programming skills.
> Enter the Intel(R) Threading Challenge 2010.
> http://p.sf.net/sfu/intel-thread-sfd
> _______________________________________________
> Bigdata-developers mailing list
> Big...@li...
> https://lists.sourceforge.net/lists/listinfo/bigdata-developers
>
|
|
From: Bryan T. <br...@sy...> - 2010-09-08 12:19:36
|
Brian,
I am wondering how we should construct and expose for configuration ServerSockets for custom data transfer services. Right now we have two such services which open ServerSockets to do NIO operations on ByteBuffers. These are the HA write replication pipeline and the ResourceService used to send shards around. I am currently extending the ResourceService to also ship around the intermediate results during distributed query evaluation.
Reading the javadoc, it would appear that we should be using the following form of the ServerSocket constructor:
public ServerSocket(int port, int backlog, InetAddress bindAddr) throws IOException;
This allows us to specify the port (or pass 0 for a random port) and to specify the local IP address on which the service will communicate. The backlog can be specified as 0 to use the default. For both of these services the port may be random since it will be disclosed by an RMI message, but it could also be nice to have the service talking on a configured port (for example, in order to simplify firewall configuration).
It seems that the pattern in use right now to specify the INetAddress is:
InetAddress.getByName(
NicUtil.getIpAddress("default.nic"/*systemPropertyName*/, "default"/*defaultNicName*/, false/*loopback*/)
);
Presumably, the "default.nic" value could be replaced by the name of a different system property if we wanted to bind high volume data transfers onto a specific nic. If that is the intention, then perhaps we should define an alternative system property name for those cases and begin to use it at the appropriate locations in the code base.
However, this still leaves unspecified a pattern to configure the port for the service. Would you envision a "PortUtil" (or an extension to NicUtil) which uses a similar pattern specifying the system property name and the default port name (or port number)?
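To make the pattern under discussion concrete, here is a minimal sketch of a configured ServerSocket: the port comes from a system property (with 0 meaning "let the OS pick a free port") and the bind address is supplied by the caller. The property name "example.port" and the class name are illustrative only; this is not the actual bigdata code.

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class ConfiguredServerSocket {

    /**
     * Opens a ServerSocket on the port named by the given system property
     * (defaulting to 0, i.e. an OS-assigned port) bound to the given
     * local address.
     */
    public static ServerSocket open(String portProperty, InetAddress bindAddr)
            throws IOException {
        final int port = Integer.parseInt(
                System.getProperty(portProperty, "0")); // 0 = random port
        final int backlog = 0; // 0 selects the implementation default
        return new ServerSocket(port, backlog, bindAddr);
    }

    public static void main(String[] args) throws IOException {
        // Bind on loopback with an OS-assigned port; a service would then
        // disclose the actual port via RMI, as described above.
        try (ServerSocket ss =
                open("example.port", InetAddress.getLoopbackAddress())) {
            System.out.println(ss.getLocalPort()); // prints the chosen port
        }
    }
}
```

Setting -Dexample.port=5530 would pin the service to a fixed port for firewall-friendly deployments, while leaving the property unset keeps today's random-port behavior.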
Thanks,
Bryan
|
|
From: husdon <no...@no...> - 2010-09-03 20:49:58
|
See <http://localhost/job/BigData/changes> |
|
From: husdon <no...@no...> - 2010-09-03 20:09:10
|
See <http://localhost/job/BigData/changes> |
|
From: Bryan T. <br...@sy...> - 2010-09-03 19:16:40
|
Ok. I will make those changes.
Thanks,
Bryan
________________________________
From: Brian Murphy [mailto:btm...@gm...]
Sent: Friday, September 03, 2010 3:12 PM
To: big...@li...
Subject: Re: [Bigdata-developers] [Bigdata-commit] SF.net SVN: bigdata:[3499] trunk/build.xml
On Fri, Sep 3, 2010 at 12:04 PM, Bryan Thompson <br...@sy...<mailto:br...@sy...>> wrote:
While I do not disagree with your comments about the usefulness and utility of jini in non-distributed situations, would it be possible to move this method into another class so that the bigdata "core" packages do not have a jini dependency?
Rather than moving the method to another class, you should probably simply remove it. It was included as a convenience, and I'm pretty sure no one is using it. Note that you'll also want to remove the com.sun.jini.logging.Levels class as well.
BrianM |