From: Bryan T. <tho...@us...> - 2007-04-13 16:48:09
Update of /cvsroot/cweb/bigdata/lib
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv15852/lib

Modified Files:
	BytesUtil.dll
Log Message:
Attempting to update the JNI library for Windows.

Index: BytesUtil.dll
===================================================================
RCS file: /cvsroot/cweb/bigdata/lib/BytesUtil.dll,v
retrieving revision 1.1
retrieving revision 1.2
diff -C2 -d -r1.1 -r1.2
Binary files /tmp/cvsCbRo4K and /tmp/cvsRzijjt differ
From: Bryan T. <tho...@us...> - 2007-04-13 16:48:09
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/btree
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv15852/src/java/com/bigdata/btree

Modified Files:
	BytesUtil.java
Log Message:
Attempting to update the JNI library for Windows.

Index: BytesUtil.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/btree/BytesUtil.java,v
retrieving revision 1.2
retrieving revision 1.3
diff -C2 -d -r1.2 -r1.3
*** BytesUtil.java	13 Apr 2007 15:55:12 -0000	1.2
--- BytesUtil.java	13 Apr 2007 16:48:05 -0000	1.3
***************
*** 13,20 ****
   * positions in that byte[] is maintained.
   * <p>
!  * See BytesUtil.c in this package for instructions on compiling the JNI
!  * methods. However, note that the JNI methods do not appear to be as fast as
!  * the pure Java methods - presumably because of the overhead of going from Java
!  * to C.
   *
   * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
--- 13,17 ----
   * positions in that byte[] is maintained.
   * <p>
!  * See {@link #main(String[])} which provides a test for the JNI integration.
   *
   * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
***************
*** 585,588 ****
--- 582,607 ----
   * This method tries to execute the JNI methods.
   *
+  * See BytesUtil.c in this package for instructions on compiling the JNI
+  * methods. However, note that the JNI methods do not appear to be as fast
+  * as the pure Java methods - presumably because of the overhead of going
+  * from Java to C.
+  * <p>
+  * In order to use the JNI library under Windows, you must specify the JNI
+  * library location using the PATH environment variable, e.g.,
+  *
+  * <pre>
+  * cd bigdata
+  * set PATH=%PATH%;lib
+  * java -cp bin com.bigdata.btree.BytesUtil
+  * </pre>
+  *
+  * <p>
+  * In order to use the JNI library under un*x, you must specify the JNI
+  * library location
+  *
+  * <pre>
+  * java -Djava.library.path=lib com.bigdata.btree.BytesUtil
+  * </pre>
+  *
   * @param args
   *
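[Editorial note] The instructions above rely on the JVM resolving the native library when the class initializes. A minimal sketch of that wiring follows, assuming BytesUtil uses the usual System.loadLibrary pattern with a pure Java fallback; the class and field names below are illustrative and are not taken from BytesUtil.java.

// Illustrative sketch only -- not the actual BytesUtil source. It shows the
// common JNI pattern implied by the instructions above: try to load the
// native library at class initialization and fall back to pure Java otherwise.
public class JniLoadSketch {

    /** True iff BytesUtil.dll / libBytesUtil.so was resolved. */
    static final boolean linked;

    static {
        boolean ok;
        try {
            // Located via %PATH% on Windows or -Djava.library.path=lib elsewhere.
            System.loadLibrary("BytesUtil");
            ok = true;
        } catch (UnsatisfiedLinkError err) {
            // Library not found: callers keep using the pure Java methods.
            ok = false;
        }
        linked = ok;
    }
}

If the load fails, the error surfaces either here or, when loading is left to the caller, as the UnsatisfiedLinkError documented on main() in the commit below.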
From: Bryan T. <tho...@us...> - 2007-04-13 16:29:39
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/btree
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9139/src/java/com/bigdata/btree

Modified Files:
	BytesUtil.c
Log Message:
update to instructions on how to compile the DLL for windows.

Index: BytesUtil.c
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/btree/BytesUtil.c,v
retrieving revision 1.1
retrieving revision 1.2
diff -C2 -d -r1.1 -r1.2
*** BytesUtil.c	13 Apr 2007 15:04:12 -0000	1.1
--- BytesUtil.c	13 Apr 2007 16:29:35 -0000	1.2
***************
*** 36,40 ****
   * using the Microsoft Visual C++ compiler:

! cl "-I%JAVA_HOME%\include" "-I%JAVA_HOME%\include\win32" -LD BytesUtil.c -FeBytesUtil.dll

  other things tried, some of which may work or have useful optimizations:
--- 36,40 ----
   * using the Microsoft Visual C++ compiler:

! cl -I. "-I%JAVA_HOME%\include" "-I%JAVA_HOME%\include\win32" -LD src/java/com/bigdata/btree/BytesUtil.c -FeBytesUtil.dll

  other things tried, some of which may work or have useful optimizations:
From: Bryan T. <tho...@us...> - 2007-04-13 15:55:17
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/btree
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv28891/src/java/com/bigdata/btree

Modified Files:
	BytesUtil.java
Log Message:
Updated the linux JNI BytesUtil library, modified the main routine to run the
JNI versions of the methods so that it provides a handy installation test.

Index: BytesUtil.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/btree/BytesUtil.java,v
retrieving revision 1.1
retrieving revision 1.2
diff -C2 -d -r1.1 -r1.2
*** BytesUtil.java	13 Apr 2007 15:04:12 -0000	1.1
--- BytesUtil.java	13 Apr 2007 15:55:12 -0000	1.2
***************
*** 582,588 ****
  }

  public static void main(String[] args) {

! if( 0 != BytesUtil.compareBytes(new byte[]{1,2,3}, new byte[]{1,2,3}) ) {

  throw new AssertionError();
--- 582,598 ----
  }

+ /**
+  * This method tries to execute the JNI methods.
+  *
+  * @param args
+  *
+  * @exception UnsatisfiedLinkError
+  *                if the JNI methods can not be resolved.
+  * @exception AssertionError
+  *                if the JNI methods do not produce the expected answers.
+  */
  public static void main(String[] args) {

! if( 0 != BytesUtil._compareBytes(3, new byte[]{1,2,3}, 3, new byte[]{1,2,3}) ) {

  throw new AssertionError();
***************
*** 590,594 ****
  }

! if( 0 != BytesUtil.compareBytesWithLenAndOffset(0, 3, new byte[]{1,2,3}, 0, 3, new byte[]{1,2,3}) ) {

  throw new AssertionError();
--- 600,604 ----
  }

! if( 0 != BytesUtil._compareBytesWithOffsetAndLen(0, 3, new byte[]{1,2,3}, 0, 3, new byte[]{1,2,3}) ) {

  throw new AssertionError();
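[Editorial note] The "expected answers" the installation test asserts against follow from what the comparator computes. A minimal pure Java sketch of that contract is below, assuming (from the class javadoc and the UnsignedByteArrayComparator companion class) that keys are compared as unsigned byte strings; the method name compareUnsignedBytes is illustrative and not part of the bigdata API, and the parameter layout mirrors _compareBytesWithOffsetAndLen(aoff, alen, a, boff, blen, b).

// Assumed comparison contract: lexicographic comparison of byte[] keys with
// each byte treated as unsigned (0..255). Not the verbatim bigdata code.
static int compareUnsignedBytes(int aoff, int alen, byte[] a,
                                int boff, int blen, byte[] b) {
    final int n = Math.min(alen, blen);
    for (int i = 0; i < n; i++) {
        // Mask to 0..255 so that, e.g., (byte) 0xff orders after (byte) 0x01.
        final int diff = (a[aoff + i] & 0xff) - (b[boff + i] & 0xff);
        if (diff != 0)
            return diff;
    }
    // Equal over the shared prefix: the shorter key orders first.
    return alen - blen;
}

For identical inputs such as {1,2,3} vs. {1,2,3} this returns 0, which is exactly what the assertions in main() require of both the _compareBytes and _compareBytesWithOffsetAndLen JNI paths.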
From: Bryan T. <tho...@us...> - 2007-04-13 15:55:17
Update of /cvsroot/cweb/bigdata/lib
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv28891/lib

Modified Files:
	libBytesUtil.so
Log Message:
Updated the linux JNI BytesUtil library, modified the main routine to run the
JNI versions of the methods so that it provides a handy installation test.

Index: libBytesUtil.so
===================================================================
RCS file: /cvsroot/cweb/bigdata/lib/libBytesUtil.so,v
retrieving revision 1.3
retrieving revision 1.4
diff -C2 -d -r1.3 -r1.4
Binary files /tmp/cvs1gsBFg and /tmp/cvsL1FaYc differ
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/service In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9284/src/java/com/bigdata/service Modified Files: EmbeddedDataService.java IMapOp.java DataService.java IDataService.java IMetadataService.java DataServiceClient.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. Index: DataService.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/service/DataService.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** DataService.java 22 Mar 2007 21:11:23 -0000 1.5 --- DataService.java 13 Apr 2007 15:04:19 -0000 1.6 *************** *** 57,75 **** import java.util.concurrent.Executors; import com.bigdata.journal.AbstractJournal; import com.bigdata.journal.ITx; import com.bigdata.journal.IsolationEnum; import com.bigdata.journal.Journal; - import com.bigdata.objndx.BatchContains; - import com.bigdata.objndx.BatchInsert; - import com.bigdata.objndx.BatchLookup; - import com.bigdata.objndx.BatchRemove; - import com.bigdata.objndx.IBatchBTree; - import com.bigdata.objndx.IBatchOp; - import com.bigdata.objndx.IEntryIterator; - import com.bigdata.objndx.IIndex; - import com.bigdata.objndx.ILinearList; - import com.bigdata.objndx.IReadOnlyBatchOp; - import com.bigdata.objndx.ISimpleBTree; import com.bigdata.util.concurrent.DaemonThreadFactory; --- 57,75 ---- import java.util.concurrent.Executors; + import com.bigdata.btree.BatchContains; + import com.bigdata.btree.BatchInsert; + import com.bigdata.btree.BatchLookup; + import com.bigdata.btree.BatchRemove; + import com.bigdata.btree.IBatchBTree; + import com.bigdata.btree.IBatchOp; + import com.bigdata.btree.IEntryIterator; + import com.bigdata.btree.IIndex; + import com.bigdata.btree.ILinearList; + import com.bigdata.btree.IReadOnlyBatchOp; + import com.bigdata.btree.ISimpleBTree; import com.bigdata.journal.AbstractJournal; import com.bigdata.journal.ITx; import com.bigdata.journal.IsolationEnum; import com.bigdata.journal.Journal; import com.bigdata.util.concurrent.DaemonThreadFactory; Index: IDataService.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/service/IDataService.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** IDataService.java 10 Apr 2007 18:33:31 -0000 1.5 --- IDataService.java 13 Apr 2007 15:04:19 -0000 1.6 *************** *** 51,59 **** import java.util.concurrent.ExecutionException; import com.bigdata.journal.ITransactionManager; import com.bigdata.journal.ITxCommitProtocol; import com.bigdata.journal.IsolationEnum; - import com.bigdata.objndx.BTree; - import com.bigdata.objndx.IBatchOp; import com.bigdata.service.DataService.RangeQueryResult; --- 51,59 ---- import java.util.concurrent.ExecutionException; + import com.bigdata.btree.BTree; + import com.bigdata.btree.IBatchOp; import com.bigdata.journal.ITransactionManager; import com.bigdata.journal.ITxCommitProtocol; import com.bigdata.journal.IsolationEnum; import com.bigdata.service.DataService.RangeQueryResult; *************** *** 89,92 **** --- 89,97 ---- * can have more flexibility since they are under less of a latency * constraint. 
+ * + * @todo add protocol / service version information to this interface and + * provide for life switch-over from service version to service version so + * that you can update or rollback the installed service versions with + * 100% uptime. */ public interface IDataService extends IRemoteTxCommitProtocol { Index: DataServiceClient.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/service/DataServiceClient.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** DataServiceClient.java 27 Mar 2007 14:34:24 -0000 1.4 --- DataServiceClient.java 13 Apr 2007 15:04:19 -0000 1.5 *************** *** 52,57 **** import java.util.concurrent.ExecutionException; import com.bigdata.journal.ValidationError; - import com.bigdata.objndx.IBatchOp; import com.bigdata.service.DataService.RangeQueryResult; --- 52,57 ---- import java.util.concurrent.ExecutionException; + import com.bigdata.btree.IBatchOp; import com.bigdata.journal.ValidationError; import com.bigdata.service.DataService.RangeQueryResult; Index: EmbeddedDataService.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/service/EmbeddedDataService.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** EmbeddedDataService.java 22 Mar 2007 21:11:23 -0000 1.3 --- EmbeddedDataService.java 13 Apr 2007 15:04:19 -0000 1.4 *************** *** 54,59 **** import java.util.concurrent.Executors; import com.bigdata.journal.ValidationError; - import com.bigdata.objndx.IBatchOp; import com.bigdata.service.DataService.RangeQueryResult; import com.bigdata.util.concurrent.DaemonThreadFactory; --- 54,59 ---- import java.util.concurrent.Executors; + import com.bigdata.btree.IBatchOp; import com.bigdata.journal.ValidationError; import com.bigdata.service.DataService.RangeQueryResult; import com.bigdata.util.concurrent.DaemonThreadFactory; Index: IMapOp.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/service/IMapOp.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** IMapOp.java 17 Mar 2007 23:14:58 -0000 1.2 --- IMapOp.java 13 Apr 2007 15:04:19 -0000 1.3 *************** *** 1,5 **** package com.bigdata.service; ! import com.bigdata.objndx.BytesUtil; /** --- 1,5 ---- package com.bigdata.service; ! import com.bigdata.btree.BytesUtil; /** Index: IMetadataService.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/service/IMetadataService.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** IMetadataService.java 27 Mar 2007 14:34:23 -0000 1.3 --- IMetadataService.java 13 Apr 2007 15:04:19 -0000 1.4 *************** *** 63,66 **** --- 63,68 ---- * compatible with RMI. * + * @todo extend IDataService + * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ |
From: Bryan T. <tho...@us...> - 2007-04-13 15:04:32
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9284/src/java/com/bigdata/scaleup Modified Files: ResourceState.java PartitionedIndexView.java PartitionMetadata.java Name2MetadataAddr.java MetadataIndex.java SlaveJournal.java MasterJournal.java IPartitionTask.java IsolatablePartitionedIndexView.java AbstractPartitionTask.java SegmentMetadata.java IResourceMetadata.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. Index: IPartitionTask.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/IPartitionTask.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** IPartitionTask.java 8 Mar 2007 18:14:06 -0000 1.1 --- IPartitionTask.java 13 Apr 2007 15:04:24 -0000 1.2 *************** *** 44,48 **** package com.bigdata.scaleup; ! import com.bigdata.objndx.IndexSegment; /** --- 44,48 ---- package com.bigdata.scaleup; ! import com.bigdata.btree.IndexSegment; /** Index: MetadataIndex.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/MetadataIndex.java,v retrieving revision 1.7 retrieving revision 1.8 diff -C2 -d -r1.7 -r1.8 *** MetadataIndex.java 27 Mar 2007 14:34:22 -0000 1.7 --- MetadataIndex.java 13 Apr 2007 15:04:24 -0000 1.8 *************** *** 52,62 **** import org.CognitiveWeb.extser.LongPacker; import com.bigdata.isolation.IsolatedBTree; import com.bigdata.journal.Journal; import com.bigdata.journal.Tx; - import com.bigdata.objndx.BTree; - import com.bigdata.objndx.BTreeMetadata; - import com.bigdata.objndx.IIndex; - import com.bigdata.objndx.IndexSegment; import com.bigdata.rawstore.IRawStore; --- 52,62 ---- import org.CognitiveWeb.extser.LongPacker; + import com.bigdata.btree.BTree; + import com.bigdata.btree.BTreeMetadata; + import com.bigdata.btree.IIndex; + import com.bigdata.btree.IndexSegment; import com.bigdata.isolation.IsolatedBTree; import com.bigdata.journal.Journal; import com.bigdata.journal.Tx; import com.bigdata.rawstore.IRawStore; Index: IResourceMetadata.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/IResourceMetadata.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** IResourceMetadata.java 29 Mar 2007 17:01:33 -0000 1.3 --- IResourceMetadata.java 13 Apr 2007 15:04:24 -0000 1.4 *************** *** 47,53 **** import java.util.UUID; import com.bigdata.journal.Journal; - import com.bigdata.objndx.IndexSegment; - import com.bigdata.objndx.IndexSegmentMetadata; /** --- 47,53 ---- import java.util.UUID; + import com.bigdata.btree.IndexSegment; + import com.bigdata.btree.IndexSegmentMetadata; import com.bigdata.journal.Journal; /** Index: IsolatablePartitionedIndexView.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/IsolatablePartitionedIndexView.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** IsolatablePartitionedIndexView.java 29 Mar 2007 17:01:33 -0000 1.3 --- IsolatablePartitionedIndexView.java 13 Apr 2007 15:04:24 -0000 1.4 *************** *** 48,57 **** package com.bigdata.scaleup; import com.bigdata.isolation.IIsolatableIndex; import 
com.bigdata.isolation.IsolatableFusedView; import com.bigdata.isolation.UnisolatedBTree; - import com.bigdata.objndx.IBatchBTree; - import com.bigdata.objndx.IEntryIterator; - import com.bigdata.objndx.ReadOnlyFusedView; /** --- 48,57 ---- package com.bigdata.scaleup; + import com.bigdata.btree.IBatchBTree; + import com.bigdata.btree.IEntryIterator; + import com.bigdata.btree.ReadOnlyFusedView; import com.bigdata.isolation.IIsolatableIndex; import com.bigdata.isolation.IsolatableFusedView; import com.bigdata.isolation.UnisolatedBTree; /** Index: MasterJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/MasterJournal.java,v retrieving revision 1.6 retrieving revision 1.7 diff -C2 -d -r1.6 -r1.7 *** MasterJournal.java 29 Mar 2007 17:01:33 -0000 1.6 --- MasterJournal.java 13 Apr 2007 15:04:24 -0000 1.7 *************** *** 54,57 **** --- 54,67 ---- import java.util.Properties; + import com.bigdata.btree.AbstractBTree; + import com.bigdata.btree.BTree; + import com.bigdata.btree.IFusedView; + import com.bigdata.btree.IIndex; + import com.bigdata.btree.IndexSegment; + import com.bigdata.btree.IndexSegmentBuilder; + import com.bigdata.btree.IndexSegmentFileStore; + import com.bigdata.btree.IndexSegmentMerger; + import com.bigdata.btree.IndexSegmentMerger.MergedEntryIterator; + import com.bigdata.btree.IndexSegmentMerger.MergedLeafIterator; import com.bigdata.cache.LRUCache; import com.bigdata.cache.WeakValueCache; *************** *** 67,80 **** import com.bigdata.journal.Journal; import com.bigdata.journal.Name2Addr.Entry; - import com.bigdata.objndx.AbstractBTree; - import com.bigdata.objndx.BTree; - import com.bigdata.objndx.IFusedView; - import com.bigdata.objndx.IIndex; - import com.bigdata.objndx.IndexSegment; - import com.bigdata.objndx.IndexSegmentBuilder; - import com.bigdata.objndx.IndexSegmentFileStore; - import com.bigdata.objndx.IndexSegmentMerger; - import com.bigdata.objndx.IndexSegmentMerger.MergedEntryIterator; - import com.bigdata.objndx.IndexSegmentMerger.MergedLeafIterator; import com.bigdata.rawstore.Bytes; --- 77,80 ---- Index: AbstractPartitionTask.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/AbstractPartitionTask.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** AbstractPartitionTask.java 29 Mar 2007 17:01:33 -0000 1.5 --- AbstractPartitionTask.java 13 Apr 2007 15:04:24 -0000 1.6 *************** *** 48,61 **** import java.util.concurrent.Executors; import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.isolation.Value; - import com.bigdata.objndx.AbstractBTree; - import com.bigdata.objndx.IValueSerializer; - import com.bigdata.objndx.IndexSegment; - import com.bigdata.objndx.IndexSegmentBuilder; - import com.bigdata.objndx.IndexSegmentMerger; - import com.bigdata.objndx.RecordCompressor; - import com.bigdata.objndx.IndexSegmentMerger.MergedEntryIterator; - import com.bigdata.objndx.IndexSegmentMerger.MergedLeafIterator; import com.bigdata.rawstore.Bytes; --- 48,61 ---- import java.util.concurrent.Executors; + import com.bigdata.btree.AbstractBTree; + import com.bigdata.btree.IValueSerializer; + import com.bigdata.btree.IndexSegment; + import com.bigdata.btree.IndexSegmentBuilder; + import com.bigdata.btree.IndexSegmentMerger; + import com.bigdata.btree.RecordCompressor; + import com.bigdata.btree.IndexSegmentMerger.MergedEntryIterator; + import 
com.bigdata.btree.IndexSegmentMerger.MergedLeafIterator; import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.isolation.Value; import com.bigdata.rawstore.Bytes; Index: SlaveJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/SlaveJournal.java,v retrieving revision 1.7 retrieving revision 1.8 diff -C2 -d -r1.7 -r1.8 *** SlaveJournal.java 29 Mar 2007 17:01:33 -0000 1.7 --- SlaveJournal.java 13 Apr 2007 15:04:24 -0000 1.8 *************** *** 48,59 **** import java.util.UUID; import com.bigdata.isolation.IIsolatableIndex; import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.journal.Journal; import com.bigdata.journal.Name2Addr; - import com.bigdata.objndx.BTree; - import com.bigdata.objndx.IEntryIterator; - import com.bigdata.objndx.IIndex; - import com.bigdata.objndx.IndexSegment; import com.bigdata.rawstore.Addr; --- 48,59 ---- import java.util.UUID; + import com.bigdata.btree.BTree; + import com.bigdata.btree.IEntryIterator; + import com.bigdata.btree.IIndex; + import com.bigdata.btree.IndexSegment; import com.bigdata.isolation.IIsolatableIndex; import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.journal.Journal; import com.bigdata.journal.Name2Addr; import com.bigdata.rawstore.Addr; Index: SegmentMetadata.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/SegmentMetadata.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** SegmentMetadata.java 29 Mar 2007 17:01:33 -0000 1.5 --- SegmentMetadata.java 13 Apr 2007 15:04:24 -0000 1.6 *************** *** 46,50 **** import java.util.UUID; ! import com.bigdata.objndx.IndexSegment; /** --- 46,50 ---- import java.util.UUID; ! 
import com.bigdata.btree.IndexSegment; /** Index: PartitionedIndexView.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/PartitionedIndexView.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** PartitionedIndexView.java 29 Mar 2007 17:01:33 -0000 1.3 --- PartitionedIndexView.java 13 Apr 2007 15:04:24 -0000 1.4 *************** *** 54,71 **** import java.util.UUID; import com.bigdata.journal.ICommitter; import com.bigdata.journal.Journal; - import com.bigdata.objndx.AbstractBTree; - import com.bigdata.objndx.BTree; - import com.bigdata.objndx.BatchContains; - import com.bigdata.objndx.BatchInsert; - import com.bigdata.objndx.BatchLookup; - import com.bigdata.objndx.BatchRemove; - import com.bigdata.objndx.EmptyEntryIterator; - import com.bigdata.objndx.IEntryIterator; - import com.bigdata.objndx.IFusedView; - import com.bigdata.objndx.IIndex; - import com.bigdata.objndx.IndexSegment; - import com.bigdata.objndx.ReadOnlyFusedView; /** --- 54,71 ---- import java.util.UUID; + import com.bigdata.btree.AbstractBTree; + import com.bigdata.btree.BTree; + import com.bigdata.btree.BatchContains; + import com.bigdata.btree.BatchInsert; + import com.bigdata.btree.BatchLookup; + import com.bigdata.btree.BatchRemove; + import com.bigdata.btree.EmptyEntryIterator; + import com.bigdata.btree.IEntryIterator; + import com.bigdata.btree.IFusedView; + import com.bigdata.btree.IIndex; + import com.bigdata.btree.IndexSegment; + import com.bigdata.btree.ReadOnlyFusedView; import com.bigdata.journal.ICommitter; import com.bigdata.journal.Journal; /** Index: PartitionMetadata.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/PartitionMetadata.java,v retrieving revision 1.6 retrieving revision 1.7 diff -C2 -d -r1.6 -r1.7 *** PartitionMetadata.java 12 Apr 2007 23:59:35 -0000 1.6 --- PartitionMetadata.java 13 Apr 2007 15:04:24 -0000 1.7 *************** *** 50,58 **** import java.util.UUID; import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.journal.Journal; - import com.bigdata.objndx.DataOutputBuffer; - import com.bigdata.objndx.IValueSerializer; - import com.bigdata.objndx.IndexSegment; /** --- 50,58 ---- import java.util.UUID; + import com.bigdata.btree.DataOutputBuffer; + import com.bigdata.btree.IValueSerializer; + import com.bigdata.btree.IndexSegment; import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.journal.Journal; /** Index: Name2MetadataAddr.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/Name2MetadataAddr.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** Name2MetadataAddr.java 12 Mar 2007 18:06:12 -0000 1.5 --- Name2MetadataAddr.java 13 Apr 2007 15:04:24 -0000 1.6 *************** *** 48,53 **** package com.bigdata.scaleup; import com.bigdata.journal.Name2Addr; - import com.bigdata.objndx.BTreeMetadata; import com.bigdata.rawstore.IRawStore; --- 48,53 ---- package com.bigdata.scaleup; + import com.bigdata.btree.BTreeMetadata; import com.bigdata.journal.Name2Addr; import com.bigdata.rawstore.IRawStore; Index: ResourceState.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/ResourceState.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** ResourceState.java 
8 Mar 2007 18:14:06 -0000 1.1 --- ResourceState.java 13 Apr 2007 15:04:24 -0000 1.2 *************** *** 44,48 **** package com.bigdata.scaleup; ! import com.bigdata.objndx.IndexSegment; /** --- 44,48 ---- package com.bigdata.scaleup; ! import com.bigdata.btree.IndexSegment; /** |
From: Bryan T. <tho...@us...> - 2007-04-13 15:04:32
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/scaleup In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9284/src/test/com/bigdata/scaleup Modified Files: TestPartitionedJournal.java TestPartitionedIndex.java TestMetadataIndex.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. Index: TestMetadataIndex.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/scaleup/TestMetadataIndex.java,v retrieving revision 1.14 retrieving revision 1.15 diff -C2 -d -r1.14 -r1.15 *** TestMetadataIndex.java 29 Mar 2007 17:01:34 -0000 1.14 --- TestMetadataIndex.java 13 Apr 2007 15:04:24 -0000 1.15 *************** *** 56,74 **** import org.apache.log4j.Level; import com.bigdata.journal.BufferMode; import com.bigdata.journal.Journal; import com.bigdata.journal.Options; - import com.bigdata.objndx.AbstractBTree; - import com.bigdata.objndx.AbstractBTreeTestCase; - import com.bigdata.objndx.BTree; - import com.bigdata.objndx.BatchInsert; - import com.bigdata.objndx.ReadOnlyFusedView; - import com.bigdata.objndx.IndexSegment; - import com.bigdata.objndx.IndexSegmentBuilder; - import com.bigdata.objndx.IndexSegmentFileStore; - import com.bigdata.objndx.IndexSegmentMerger; - import com.bigdata.objndx.SimpleEntry; - import com.bigdata.objndx.IndexSegmentMerger.MergedEntryIterator; - import com.bigdata.objndx.IndexSegmentMerger.MergedLeafIterator; import com.bigdata.rawstore.Bytes; import com.bigdata.rawstore.IRawStore; --- 56,74 ---- import org.apache.log4j.Level; + import com.bigdata.btree.AbstractBTree; + import com.bigdata.btree.AbstractBTreeTestCase; + import com.bigdata.btree.BTree; + import com.bigdata.btree.BatchInsert; + import com.bigdata.btree.IndexSegment; + import com.bigdata.btree.IndexSegmentBuilder; + import com.bigdata.btree.IndexSegmentFileStore; + import com.bigdata.btree.IndexSegmentMerger; + import com.bigdata.btree.ReadOnlyFusedView; + import com.bigdata.btree.SimpleEntry; + import com.bigdata.btree.IndexSegmentMerger.MergedEntryIterator; + import com.bigdata.btree.IndexSegmentMerger.MergedLeafIterator; import com.bigdata.journal.BufferMode; import com.bigdata.journal.Journal; import com.bigdata.journal.Options; import com.bigdata.rawstore.Bytes; import com.bigdata.rawstore.IRawStore; Index: TestPartitionedIndex.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/scaleup/TestPartitionedIndex.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** TestPartitionedIndex.java 11 Mar 2007 11:42:47 -0000 1.3 --- TestPartitionedIndex.java 13 Apr 2007 15:04:24 -0000 1.4 *************** *** 48,55 **** package com.bigdata.scaleup; import com.bigdata.isolation.UnisolatedBTree; - import com.bigdata.objndx.AbstractBTreeTestCase; - import com.bigdata.objndx.IndexSegmentBuilder; - import com.bigdata.objndx.IndexSegmentMerger; /** --- 48,55 ---- package com.bigdata.scaleup; + import com.bigdata.btree.AbstractBTreeTestCase; + import com.bigdata.btree.IndexSegmentBuilder; + import com.bigdata.btree.IndexSegmentMerger; import com.bigdata.isolation.UnisolatedBTree; /** Index: TestPartitionedJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/scaleup/TestPartitionedJournal.java,v retrieving revision 1.9 retrieving revision 1.10 
diff -C2 -d -r1.9 -r1.10 *** TestPartitionedJournal.java 29 Mar 2007 17:01:34 -0000 1.9 --- TestPartitionedJournal.java 13 Apr 2007 15:04:24 -0000 1.10 *************** *** 55,66 **** import junit.framework.TestCase2; import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.journal.Journal; - import com.bigdata.objndx.AbstractBTreeTestCase; - import com.bigdata.objndx.BTree; - import com.bigdata.objndx.BatchInsert; - import com.bigdata.objndx.ByteArrayValueSerializer; - import com.bigdata.objndx.IIndex; - import com.bigdata.objndx.KeyBuilder; import com.bigdata.rawstore.SimpleMemoryRawStore; import com.bigdata.scaleup.MasterJournal.MergePolicy; --- 55,66 ---- import junit.framework.TestCase2; + import com.bigdata.btree.AbstractBTreeTestCase; + import com.bigdata.btree.BTree; + import com.bigdata.btree.BatchInsert; + import com.bigdata.btree.ByteArrayValueSerializer; + import com.bigdata.btree.IIndex; + import com.bigdata.btree.KeyBuilder; import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.journal.Journal; import com.bigdata.rawstore.SimpleMemoryRawStore; import com.bigdata.scaleup.MasterJournal.MergePolicy; |
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/journal In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9284/src/java/com/bigdata/journal Modified Files: TemporaryStore.java CommitRecordIndex.java ResourceManager.java Tx.java ReadCommittedTx.java ITx.java IIndexManager.java AbstractBufferStrategy.java ITransactionManager.java Name2Addr.java IJournal.java Options.java IIndexStore.java AbstractJournal.java RootBlockView.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. Index: ReadCommittedTx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/ReadCommittedTx.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** ReadCommittedTx.java 27 Mar 2007 14:34:23 -0000 1.4 --- ReadCommittedTx.java 13 Apr 2007 15:04:23 -0000 1.5 *************** *** 50,61 **** import java.util.UUID; import com.bigdata.isolation.IIsolatableIndex; import com.bigdata.isolation.IIsolatedIndex; - import com.bigdata.objndx.BatchContains; - import com.bigdata.objndx.BatchInsert; - import com.bigdata.objndx.BatchLookup; - import com.bigdata.objndx.BatchRemove; - import com.bigdata.objndx.IEntryIterator; - import com.bigdata.objndx.IIndex; /** --- 50,61 ---- import java.util.UUID; + import com.bigdata.btree.BatchContains; + import com.bigdata.btree.BatchInsert; + import com.bigdata.btree.BatchLookup; + import com.bigdata.btree.BatchRemove; + import com.bigdata.btree.IEntryIterator; + import com.bigdata.btree.IIndex; import com.bigdata.isolation.IIsolatableIndex; import com.bigdata.isolation.IIsolatedIndex; /** Index: IJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/IJournal.java,v retrieving revision 1.11 retrieving revision 1.12 diff -C2 -d -r1.11 -r1.12 *** IJournal.java 29 Mar 2007 17:01:32 -0000 1.11 --- IJournal.java 13 Apr 2007 15:04:23 -0000 1.12 *************** *** 50,54 **** import java.util.Properties; ! import com.bigdata.objndx.IIndex; /** --- 50,54 ---- import java.util.Properties; ! import com.bigdata.btree.IIndex; /** Index: Name2Addr.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Name2Addr.java,v retrieving revision 1.9 retrieving revision 1.10 diff -C2 -d -r1.9 -r1.10 *** Name2Addr.java 12 Apr 2007 23:59:34 -0000 1.9 --- Name2Addr.java 13 Apr 2007 15:04:23 -0000 1.10 *************** *** 13,22 **** import org.CognitiveWeb.extser.ShortPacker; ! import com.bigdata.objndx.BTree; ! import com.bigdata.objndx.BTreeMetadata; ! import com.bigdata.objndx.DataOutputBuffer; ! import com.bigdata.objndx.IIndex; ! import com.bigdata.objndx.IValueSerializer; ! import com.bigdata.objndx.KeyBuilder; import com.bigdata.rawstore.Addr; import com.bigdata.rawstore.IRawStore; --- 13,22 ---- import org.CognitiveWeb.extser.ShortPacker; ! import com.bigdata.btree.BTree; ! import com.bigdata.btree.BTreeMetadata; ! import com.bigdata.btree.DataOutputBuffer; ! import com.bigdata.btree.IIndex; ! import com.bigdata.btree.IValueSerializer; ! 
import com.bigdata.btree.KeyBuilder; import com.bigdata.rawstore.Addr; import com.bigdata.rawstore.IRawStore; Index: ITx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/ITx.java,v retrieving revision 1.7 retrieving revision 1.8 diff -C2 -d -r1.7 -r1.8 *** ITx.java 15 Mar 2007 16:11:12 -0000 1.7 --- ITx.java 13 Apr 2007 15:04:23 -0000 1.8 *************** *** 48,53 **** package com.bigdata.journal; import com.bigdata.isolation.IsolatedBTree; - import com.bigdata.objndx.IIndex; /** --- 48,53 ---- package com.bigdata.journal; + import com.bigdata.btree.IIndex; import com.bigdata.isolation.IsolatedBTree; /** Index: CommitRecordIndex.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/CommitRecordIndex.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** CommitRecordIndex.java 12 Apr 2007 23:59:34 -0000 1.5 --- CommitRecordIndex.java 13 Apr 2007 15:04:23 -0000 1.6 *************** *** 8,18 **** import org.CognitiveWeb.extser.ShortPacker; import com.bigdata.cache.LRUCache; import com.bigdata.cache.WeakValueCache; - import com.bigdata.objndx.BTree; - import com.bigdata.objndx.BTreeMetadata; - import com.bigdata.objndx.DataOutputBuffer; - import com.bigdata.objndx.IValueSerializer; - import com.bigdata.objndx.KeyBuilder; import com.bigdata.rawstore.Addr; import com.bigdata.rawstore.IRawStore; --- 8,18 ---- import org.CognitiveWeb.extser.ShortPacker; + import com.bigdata.btree.BTree; + import com.bigdata.btree.BTreeMetadata; + import com.bigdata.btree.DataOutputBuffer; + import com.bigdata.btree.IValueSerializer; + import com.bigdata.btree.KeyBuilder; import com.bigdata.cache.LRUCache; import com.bigdata.cache.WeakValueCache; import com.bigdata.rawstore.Addr; import com.bigdata.rawstore.IRawStore; Index: IIndexManager.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/IIndexManager.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** IIndexManager.java 11 Mar 2007 11:42:46 -0000 1.3 --- IIndexManager.java 13 Apr 2007 15:04:23 -0000 1.4 *************** *** 48,53 **** package com.bigdata.journal; ! import com.bigdata.objndx.BTree; ! import com.bigdata.objndx.IIndex; /** --- 48,53 ---- package com.bigdata.journal; ! import com.bigdata.btree.BTree; ! 
import com.bigdata.btree.IIndex; /** Index: ITransactionManager.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/ITransactionManager.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** ITransactionManager.java 15 Mar 2007 16:11:12 -0000 1.5 --- ITransactionManager.java 13 Apr 2007 15:04:23 -0000 1.6 *************** *** 48,54 **** package com.bigdata.journal; import com.bigdata.isolation.IConflictResolver; import com.bigdata.isolation.UnisolatedBTree; - import com.bigdata.objndx.IndexSegment; /** --- 48,54 ---- package com.bigdata.journal; + import com.bigdata.btree.IndexSegment; import com.bigdata.isolation.IConflictResolver; import com.bigdata.isolation.UnisolatedBTree; /** Index: AbstractJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/AbstractJournal.java,v retrieving revision 1.11 retrieving revision 1.12 diff -C2 -d -r1.11 -r1.12 *** AbstractJournal.java 10 Apr 2007 18:33:31 -0000 1.11 --- AbstractJournal.java 13 Apr 2007 15:04:23 -0000 1.12 *************** *** 64,67 **** --- 64,72 ---- import org.apache.log4j.Logger; + import com.bigdata.btree.AbstractBTree; + import com.bigdata.btree.BTree; + import com.bigdata.btree.IIndex; + import com.bigdata.btree.IndexSegment; + import com.bigdata.btree.ReadOnlyIndex; import com.bigdata.cache.LRUCache; import com.bigdata.cache.WeakValueCache; *************** *** 69,77 **** import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.journal.ReadCommittedTx.ReadCommittedIndex; - import com.bigdata.objndx.AbstractBTree; - import com.bigdata.objndx.BTree; - import com.bigdata.objndx.IIndex; - import com.bigdata.objndx.IndexSegment; - import com.bigdata.objndx.ReadOnlyIndex; import com.bigdata.rawstore.Addr; import com.bigdata.rawstore.Bytes; --- 74,77 ---- Index: Options.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Options.java,v retrieving revision 1.12 retrieving revision 1.13 diff -C2 -d -r1.12 -r1.13 *** Options.java 4 Apr 2007 16:52:16 -0000 1.12 --- Options.java 13 Apr 2007 15:04:23 -0000 1.13 *************** *** 48,52 **** import java.util.Properties; ! import com.bigdata.objndx.IndexSegment; import com.bigdata.rawstore.Bytes; --- 48,52 ---- import java.util.Properties; ! 
import com.bigdata.btree.IndexSegment; import com.bigdata.rawstore.Bytes; Index: Tx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Tx.java,v retrieving revision 1.37 retrieving revision 1.38 diff -C2 -d -r1.37 -r1.38 *** Tx.java 15 Mar 2007 16:11:12 -0000 1.37 --- Tx.java 13 Apr 2007 15:04:23 -0000 1.38 *************** *** 52,61 **** import java.util.Map; import com.bigdata.isolation.IIsolatedIndex; import com.bigdata.isolation.IsolatedBTree; import com.bigdata.isolation.ReadOnlyIsolatedIndex; import com.bigdata.isolation.UnisolatedBTree; - import com.bigdata.objndx.BTree; - import com.bigdata.objndx.IIndex; import com.bigdata.rawstore.Bytes; import com.bigdata.scaleup.PartitionedIndexView; --- 52,61 ---- import java.util.Map; + import com.bigdata.btree.BTree; + import com.bigdata.btree.IIndex; import com.bigdata.isolation.IIsolatedIndex; import com.bigdata.isolation.IsolatedBTree; import com.bigdata.isolation.ReadOnlyIsolatedIndex; import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.rawstore.Bytes; import com.bigdata.scaleup.PartitionedIndexView; Index: TemporaryStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/TemporaryStore.java,v retrieving revision 1.9 retrieving revision 1.10 diff -C2 -d -r1.9 -r1.10 *** TemporaryStore.java 27 Mar 2007 14:34:23 -0000 1.9 --- TemporaryStore.java 13 Apr 2007 15:04:23 -0000 1.10 *************** *** 50,56 **** import java.util.UUID; ! import com.bigdata.objndx.BTree; ! import com.bigdata.objndx.ByteArrayValueSerializer; ! import com.bigdata.objndx.IIndex; import com.bigdata.rawstore.Addr; --- 50,56 ---- import java.util.UUID; ! import com.bigdata.btree.BTree; ! import com.bigdata.btree.ByteArrayValueSerializer; ! import com.bigdata.btree.IIndex; import com.bigdata.rawstore.Addr; Index: RootBlockView.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/RootBlockView.java,v retrieving revision 1.16 retrieving revision 1.17 diff -C2 -d -r1.16 -r1.17 *** RootBlockView.java 29 Mar 2007 17:01:33 -0000 1.16 --- RootBlockView.java 13 Apr 2007 15:04:23 -0000 1.17 *************** *** 51,55 **** import java.util.UUID; ! import com.bigdata.objndx.BTreeMetadata; import com.bigdata.rawstore.Addr; import com.bigdata.rawstore.Bytes; --- 51,55 ---- import java.util.UUID; ! import com.bigdata.btree.BTreeMetadata; import com.bigdata.rawstore.Addr; import com.bigdata.rawstore.Bytes; Index: ResourceManager.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/ResourceManager.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** ResourceManager.java 10 Apr 2007 18:33:31 -0000 1.3 --- ResourceManager.java 13 Apr 2007 15:04:23 -0000 1.4 *************** *** 53,59 **** import org.apache.log4j.Logger; ! import com.bigdata.objndx.AbstractBTree; ! import com.bigdata.objndx.IndexSegment; ! import com.bigdata.objndx.IndexSegmentBuilder; import com.bigdata.rawstore.Bytes; import com.bigdata.scaleup.MasterJournal; --- 53,59 ---- import org.apache.log4j.Logger; ! import com.bigdata.btree.AbstractBTree; ! import com.bigdata.btree.IndexSegment; ! 
import com.bigdata.btree.IndexSegmentBuilder; import com.bigdata.rawstore.Bytes; import com.bigdata.scaleup.MasterJournal; Index: AbstractBufferStrategy.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/AbstractBufferStrategy.java,v retrieving revision 1.16 retrieving revision 1.17 diff -C2 -d -r1.16 -r1.17 *** AbstractBufferStrategy.java 4 Apr 2007 16:52:16 -0000 1.16 --- AbstractBufferStrategy.java 13 Apr 2007 15:04:23 -0000 1.17 *************** *** 9,13 **** import org.apache.log4j.Logger; ! import com.bigdata.objndx.AbstractBTree; import com.bigdata.rawstore.Addr; import com.bigdata.rawstore.Bytes; --- 9,13 ---- import org.apache.log4j.Logger; ! import com.bigdata.btree.AbstractBTree; import com.bigdata.rawstore.Addr; import com.bigdata.rawstore.Bytes; Index: IIndexStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/IIndexStore.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** IIndexStore.java 11 Mar 2007 11:42:46 -0000 1.1 --- IIndexStore.java 13 Apr 2007 15:04:23 -0000 1.2 *************** *** 48,52 **** package com.bigdata.journal; ! import com.bigdata.objndx.IIndex; /** --- 48,52 ---- package com.bigdata.journal; ! import com.bigdata.btree.IIndex; /** |
From: Bryan T. <tho...@us...> - 2007-04-13 15:04:32
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9284/src/java/com/bigdata/isolation Modified Files: IsolatableFusedView.java UnisolatedIndexSegment.java IsolatedBTree.java IIsolatableIndex.java ReadOnlyIsolatedIndex.java IValue.java UnisolatedBTree.java Value.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. Index: IIsolatableIndex.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation/IIsolatableIndex.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** IIsolatableIndex.java 11 Mar 2007 11:42:44 -0000 1.3 --- IIsolatableIndex.java 13 Apr 2007 15:04:23 -0000 1.4 *************** *** 48,61 **** package com.bigdata.isolation; import com.bigdata.journal.Journal; import com.bigdata.journal.Tx; import com.bigdata.journal.Name2Addr.ValueSerializer; - import com.bigdata.objndx.AbstractBTree; - import com.bigdata.objndx.BTree; - import com.bigdata.objndx.IBatchBTree; - import com.bigdata.objndx.IIndex; - import com.bigdata.objndx.IndexSegment; - import com.bigdata.objndx.IndexSegmentMerger; - import com.bigdata.objndx.Leaf; import com.bigdata.scaleup.IsolatablePartitionedIndexView; --- 48,61 ---- package com.bigdata.isolation; + import com.bigdata.btree.AbstractBTree; + import com.bigdata.btree.BTree; + import com.bigdata.btree.IBatchBTree; + import com.bigdata.btree.IIndex; + import com.bigdata.btree.IndexSegment; + import com.bigdata.btree.IndexSegmentMerger; + import com.bigdata.btree.Leaf; import com.bigdata.journal.Journal; import com.bigdata.journal.Tx; import com.bigdata.journal.Name2Addr.ValueSerializer; import com.bigdata.scaleup.IsolatablePartitionedIndexView; Index: UnisolatedIndexSegment.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation/UnisolatedIndexSegment.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** UnisolatedIndexSegment.java 27 Mar 2007 14:34:23 -0000 1.3 --- UnisolatedIndexSegment.java 13 Apr 2007 15:04:23 -0000 1.4 *************** *** 48,59 **** package com.bigdata.isolation; import com.bigdata.isolation.UnisolatedBTree.DeletedEntryFilter; - import com.bigdata.objndx.AbstractBTree; - import com.bigdata.objndx.BatchContains; - import com.bigdata.objndx.BatchLookup; - import com.bigdata.objndx.IEntryIterator; - import com.bigdata.objndx.IndexSegment; - import com.bigdata.objndx.IndexSegmentExtensionMetadata; - import com.bigdata.objndx.IndexSegmentFileStore; /** --- 48,59 ---- package com.bigdata.isolation; + import com.bigdata.btree.AbstractBTree; + import com.bigdata.btree.BatchContains; + import com.bigdata.btree.BatchLookup; + import com.bigdata.btree.IEntryIterator; + import com.bigdata.btree.IndexSegment; + import com.bigdata.btree.IndexSegmentExtensionMetadata; + import com.bigdata.btree.IndexSegmentFileStore; import com.bigdata.isolation.UnisolatedBTree.DeletedEntryFilter; /** Index: ReadOnlyIsolatedIndex.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation/ReadOnlyIsolatedIndex.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** ReadOnlyIsolatedIndex.java 28 Feb 2007 13:59:09 -0000 1.1 --- ReadOnlyIsolatedIndex.java 13 Apr 2007 
15:04:23 -0000 1.2 *************** *** 48,52 **** package com.bigdata.isolation; ! import com.bigdata.objndx.ReadOnlyIndex; /** --- 48,52 ---- package com.bigdata.isolation; ! import com.bigdata.btree.ReadOnlyIndex; /** Index: Value.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation/Value.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** Value.java 12 Apr 2007 23:59:34 -0000 1.4 --- Value.java 13 Apr 2007 15:04:23 -0000 1.5 *************** *** 51,56 **** import org.CognitiveWeb.extser.ShortPacker; ! import com.bigdata.objndx.DataOutputBuffer; ! import com.bigdata.objndx.IValueSerializer; /** --- 51,56 ---- import org.CognitiveWeb.extser.ShortPacker; ! import com.bigdata.btree.DataOutputBuffer; ! import com.bigdata.btree.IValueSerializer; /** Index: IsolatedBTree.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation/IsolatedBTree.java,v retrieving revision 1.10 retrieving revision 1.11 diff -C2 -d -r1.10 -r1.11 *** IsolatedBTree.java 27 Mar 2007 14:34:23 -0000 1.10 --- IsolatedBTree.java 13 Apr 2007 15:04:23 -0000 1.11 *************** *** 48,60 **** package com.bigdata.isolation; import com.bigdata.journal.Journal; import com.bigdata.journal.Tx; - import com.bigdata.objndx.BTree; - import com.bigdata.objndx.BTreeMetadata; - import com.bigdata.objndx.ByteArrayValueSerializer; - import com.bigdata.objndx.IBatchBTree; - import com.bigdata.objndx.IEntryIterator; - import com.bigdata.objndx.ILinearList; - import com.bigdata.objndx.ISimpleBTree; import com.bigdata.rawstore.IRawStore; --- 48,60 ---- package com.bigdata.isolation; + import com.bigdata.btree.BTree; + import com.bigdata.btree.BTreeMetadata; + import com.bigdata.btree.ByteArrayValueSerializer; + import com.bigdata.btree.IBatchBTree; + import com.bigdata.btree.IEntryIterator; + import com.bigdata.btree.ILinearList; + import com.bigdata.btree.ISimpleBTree; import com.bigdata.journal.Journal; import com.bigdata.journal.Tx; import com.bigdata.rawstore.IRawStore; Index: UnisolatedBTree.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation/UnisolatedBTree.java,v retrieving revision 1.10 retrieving revision 1.11 diff -C2 -d -r1.10 -r1.11 *** UnisolatedBTree.java 27 Mar 2007 14:34:23 -0000 1.10 --- UnisolatedBTree.java 13 Apr 2007 15:04:23 -0000 1.11 *************** *** 56,70 **** import org.CognitiveWeb.extser.LongPacker; ! import com.bigdata.objndx.BTree; ! import com.bigdata.objndx.BTreeMetadata; ! import com.bigdata.objndx.BatchContains; ! import com.bigdata.objndx.BatchInsert; ! import com.bigdata.objndx.BatchLookup; ! import com.bigdata.objndx.BatchRemove; ! import com.bigdata.objndx.IBatchOp; ! import com.bigdata.objndx.IEntryIterator; ! import com.bigdata.objndx.ISimpleBTree; ! import com.bigdata.objndx.IndexSegment; ! import com.bigdata.objndx.EntryIterator.EntryFilter; import com.bigdata.rawstore.IRawStore; --- 56,70 ---- import org.CognitiveWeb.extser.LongPacker; ! import com.bigdata.btree.BTree; ! import com.bigdata.btree.BTreeMetadata; ! import com.bigdata.btree.BatchContains; ! import com.bigdata.btree.BatchInsert; ! import com.bigdata.btree.BatchLookup; ! import com.bigdata.btree.BatchRemove; ! import com.bigdata.btree.IBatchOp; ! import com.bigdata.btree.IEntryIterator; ! import com.bigdata.btree.ISimpleBTree; ! 
import com.bigdata.btree.IndexSegment; ! import com.bigdata.btree.EntryIterator.EntryFilter; import com.bigdata.rawstore.IRawStore; Index: IsolatableFusedView.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation/IsolatableFusedView.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** IsolatableFusedView.java 29 Mar 2007 17:01:34 -0000 1.5 --- IsolatableFusedView.java 13 Apr 2007 15:04:23 -0000 1.6 *************** *** 48,55 **** package com.bigdata.isolation; ! import com.bigdata.objndx.AbstractBTree; ! import com.bigdata.objndx.IBatchBTree; ! import com.bigdata.objndx.IEntryIterator; ! import com.bigdata.objndx.ReadOnlyFusedView; import com.bigdata.scaleup.PartitionedIndexView; --- 48,55 ---- package com.bigdata.isolation; ! import com.bigdata.btree.AbstractBTree; ! import com.bigdata.btree.IBatchBTree; ! import com.bigdata.btree.IEntryIterator; ! import com.bigdata.btree.ReadOnlyFusedView; import com.bigdata.scaleup.PartitionedIndexView; Index: IValue.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation/IValue.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** IValue.java 17 Feb 2007 21:34:21 -0000 1.2 --- IValue.java 13 Apr 2007 15:04:23 -0000 1.3 *************** *** 44,48 **** package com.bigdata.isolation; ! import com.bigdata.objndx.AbstractBTree; /** --- 44,48 ---- package com.bigdata.isolation; ! import com.bigdata.btree.AbstractBTree; /** |
From: Bryan T. <tho...@us...> - 2007-04-13 15:04:29
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/journal In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9284/src/test/com/bigdata/journal Modified Files: TestCommitHistory.java AbstractTestTxRunState.java TestTx.java TestReadCommittedTx.java TestNamedIndices.java TestCommitRecordIndex.java AbstractBufferStrategyTestCase.java AbstractBTreeWithJournalTestCase.java BenchmarkJournalWriteRate.java TestReadOnlyTx.java StressTestConcurrent.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. Index: AbstractBTreeWithJournalTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/AbstractBTreeWithJournalTestCase.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** AbstractBTreeWithJournalTestCase.java 27 Mar 2007 14:34:20 -0000 1.4 --- AbstractBTreeWithJournalTestCase.java 13 Apr 2007 15:04:23 -0000 1.5 *************** *** 51,57 **** import java.util.UUID; ! import com.bigdata.objndx.AbstractBTreeTestCase; ! import com.bigdata.objndx.BTree; ! import com.bigdata.objndx.SimpleEntry; /** --- 51,57 ---- import java.util.UUID; ! import com.bigdata.btree.AbstractBTreeTestCase; ! import com.bigdata.btree.BTree; ! import com.bigdata.btree.SimpleEntry; /** Index: TestCommitHistory.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestCommitHistory.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** TestCommitHistory.java 12 Mar 2007 18:06:12 -0000 1.3 --- TestCommitHistory.java 13 Apr 2007 15:04:22 -0000 1.4 *************** *** 50,54 **** import java.nio.ByteBuffer; ! import com.bigdata.objndx.BTree; /** --- 50,54 ---- import java.nio.ByteBuffer; ! import com.bigdata.btree.BTree; /** Index: TestNamedIndices.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestNamedIndices.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** TestNamedIndices.java 27 Mar 2007 14:34:20 -0000 1.4 --- TestNamedIndices.java 13 Apr 2007 15:04:22 -0000 1.5 *************** *** 50,55 **** import java.util.UUID; ! import com.bigdata.objndx.BTree; ! import com.bigdata.objndx.SimpleEntry; import com.bigdata.scaleup.MasterJournal; --- 50,55 ---- import java.util.UUID; ! import com.bigdata.btree.BTree; ! import com.bigdata.btree.SimpleEntry; import com.bigdata.scaleup.MasterJournal; Index: AbstractBufferStrategyTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/AbstractBufferStrategyTestCase.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** AbstractBufferStrategyTestCase.java 22 Mar 2007 21:11:24 -0000 1.5 --- AbstractBufferStrategyTestCase.java 13 Apr 2007 15:04:22 -0000 1.6 *************** *** 53,57 **** import java.util.Random; ! import com.bigdata.objndx.IndexSegmentBuilder; import com.bigdata.rawstore.AbstractRawStoreTestCase; import com.bigdata.rawstore.Bytes; --- 53,57 ---- import java.util.Random; ! 
import com.bigdata.btree.IndexSegmentBuilder; import com.bigdata.rawstore.AbstractRawStoreTestCase; import com.bigdata.rawstore.Bytes; Index: TestTx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestTx.java,v retrieving revision 1.22 retrieving revision 1.23 diff -C2 -d -r1.22 -r1.23 *** TestTx.java 27 Mar 2007 14:34:21 -0000 1.22 --- TestTx.java 13 Apr 2007 15:04:22 -0000 1.23 *************** *** 50,56 **** import java.util.UUID; import com.bigdata.isolation.IsolatedBTree; import com.bigdata.isolation.UnisolatedBTree; - import com.bigdata.objndx.IIndex; /** --- 50,56 ---- import java.util.UUID; + import com.bigdata.btree.IIndex; import com.bigdata.isolation.IsolatedBTree; import com.bigdata.isolation.UnisolatedBTree; /** Index: StressTestConcurrent.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/StressTestConcurrent.java,v retrieving revision 1.11 retrieving revision 1.12 diff -C2 -d -r1.11 -r1.12 *** StressTestConcurrent.java 27 Mar 2007 14:34:21 -0000 1.11 --- StressTestConcurrent.java 13 Apr 2007 15:04:23 -0000 1.12 *************** *** 63,69 **** import java.util.concurrent.TimeUnit; import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.journal.ComparisonTestDriver.IComparisonTest; - import com.bigdata.objndx.IIndex; import com.bigdata.util.concurrent.DaemonThreadFactory; --- 63,69 ---- import java.util.concurrent.TimeUnit; + import com.bigdata.btree.IIndex; import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.journal.ComparisonTestDriver.IComparisonTest; import com.bigdata.util.concurrent.DaemonThreadFactory; Index: TestCommitRecordIndex.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestCommitRecordIndex.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** TestCommitRecordIndex.java 11 Mar 2007 11:42:34 -0000 1.2 --- TestCommitRecordIndex.java 13 Apr 2007 15:04:22 -0000 1.3 *************** *** 50,54 **** import java.nio.ByteBuffer; ! import com.bigdata.objndx.BTree; import com.bigdata.rawstore.IRawStore; import com.bigdata.rawstore.SimpleMemoryRawStore; --- 50,54 ---- import java.nio.ByteBuffer; ! 
import com.bigdata.btree.BTree; import com.bigdata.rawstore.IRawStore; import com.bigdata.rawstore.SimpleMemoryRawStore; Index: BenchmarkJournalWriteRate.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/BenchmarkJournalWriteRate.java,v retrieving revision 1.17 retrieving revision 1.18 diff -C2 -d -r1.17 -r1.18 *** BenchmarkJournalWriteRate.java 10 Apr 2007 18:33:31 -0000 1.17 --- BenchmarkJournalWriteRate.java 13 Apr 2007 15:04:23 -0000 1.18 *************** *** 60,69 **** import junit.framework.TestSuite; import com.bigdata.isolation.IIsolatableIndex; import com.bigdata.isolation.UnisolatedBTree; - import com.bigdata.objndx.BTree; - import com.bigdata.objndx.ByteArrayValueSerializer; - import com.bigdata.objndx.IIndex; - import com.bigdata.objndx.KeyBuilder; import com.bigdata.rawstore.Bytes; import com.bigdata.rawstore.IRawStore; --- 60,69 ---- import junit.framework.TestSuite; + import com.bigdata.btree.BTree; + import com.bigdata.btree.ByteArrayValueSerializer; + import com.bigdata.btree.IIndex; + import com.bigdata.btree.KeyBuilder; import com.bigdata.isolation.IIsolatableIndex; import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.rawstore.Bytes; import com.bigdata.rawstore.IRawStore; Index: TestReadCommittedTx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestReadCommittedTx.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** TestReadCommittedTx.java 27 Mar 2007 14:34:20 -0000 1.3 --- TestReadCommittedTx.java 13 Apr 2007 15:04:22 -0000 1.4 *************** *** 50,55 **** import java.util.UUID; import com.bigdata.isolation.UnisolatedBTree; - import com.bigdata.objndx.IIndex; /** --- 50,55 ---- import java.util.UUID; + import com.bigdata.btree.IIndex; import com.bigdata.isolation.UnisolatedBTree; /** Index: AbstractTestTxRunState.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/AbstractTestTxRunState.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** AbstractTestTxRunState.java 27 Mar 2007 14:34:20 -0000 1.3 --- AbstractTestTxRunState.java 13 Apr 2007 15:04:22 -0000 1.4 *************** *** 53,58 **** import junit.framework.TestSuite; import com.bigdata.isolation.UnisolatedBTree; - import com.bigdata.objndx.IIndex; /** --- 53,58 ---- import junit.framework.TestSuite; + import com.bigdata.btree.IIndex; import com.bigdata.isolation.UnisolatedBTree; /** Index: TestReadOnlyTx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestReadOnlyTx.java,v retrieving revision 1.6 retrieving revision 1.7 diff -C2 -d -r1.6 -r1.7 *** TestReadOnlyTx.java 27 Mar 2007 14:34:21 -0000 1.6 --- TestReadOnlyTx.java 13 Apr 2007 15:04:23 -0000 1.7 *************** *** 50,55 **** import java.util.UUID; import com.bigdata.isolation.UnisolatedBTree; - import com.bigdata.objndx.IIndex; /** --- 50,55 ---- import java.util.UUID; + import com.bigdata.btree.IIndex; import com.bigdata.isolation.UnisolatedBTree; /** |
From: Bryan T. <tho...@us...> - 2007-04-13 15:04:29
|
Update of /cvsroot/cweb/bigdata In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9284 Added Files: com_bigdata_btree_BytesUtil_UnsignedByteArrayComparator.h com_bigdata_btree_BytesUtil.h Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. --- NEW FILE: com_bigdata_btree_BytesUtil.h --- /* DO NOT EDIT THIS FILE - it is machine generated */ #include <jni.h> /* Header for class com_bigdata_btree_BytesUtil */ #ifndef _Included_com_bigdata_btree_BytesUtil #define _Included_com_bigdata_btree_BytesUtil #ifdef __cplusplus extern "C" { #endif #undef com_bigdata_btree_BytesUtil_minlen #define com_bigdata_btree_BytesUtil_minlen 100L /* * Class: com_bigdata_btree_BytesUtil * Method: _compareBytes * Signature: (I[BI[B)I */ JNIEXPORT jint JNICALL Java_com_bigdata_btree_BytesUtil__1compareBytes (JNIEnv *, jclass, jint, jbyteArray, jint, jbyteArray); /* * Class: com_bigdata_btree_BytesUtil * Method: _compareBytesWithOffsetAndLen * Signature: (II[BII[B)I */ JNIEXPORT jint JNICALL Java_com_bigdata_btree_BytesUtil__1compareBytesWithOffsetAndLen (JNIEnv *, jclass, jint, jint, jbyteArray, jint, jint, jbyteArray); #ifdef __cplusplus } #endif #endif --- NEW FILE: com_bigdata_btree_BytesUtil_UnsignedByteArrayComparator.h --- /* DO NOT EDIT THIS FILE - it is machine generated */ #include <jni.h> /* Header for class com_bigdata_btree_BytesUtil_UnsignedByteArrayComparator */ #ifndef _Included_com_bigdata_btree_BytesUtil_UnsignedByteArrayComparator #define _Included_com_bigdata_btree_BytesUtil_UnsignedByteArrayComparator #ifdef __cplusplus extern "C" { #endif #ifdef __cplusplus } #endif #endif |
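A note on the generated headers above, for readers unfamiliar with JNI name mangling: the "_1" in Java_com_bigdata_btree_BytesUtil__1compareBytes encodes a literal underscore, so the exported functions correspond to Java native methods named _compareBytes and _compareBytesWithOffsetAndLen on com.bigdata.btree.BytesUtil, and the jclass parameter marks both as static. The sketch below reconstructs the Java-side declarations implied by the signatures (I[BI[B)I and (II[BII[B)I; the parameter names, the declared type of minlen, and the System.loadLibrary call are illustrative assumptions, not a copy of the committed BytesUtil.java.

package com.bigdata.btree;

/*
 * Hypothetical reconstruction of only the native declarations that would
 * produce the generated header above (the real class has other members).
 */
public class BytesUtil {

    // Surfaced in the header as com_bigdata_btree_BytesUtil_minlen (100L);
    // the declared Java type (int vs long) is assumed here.
    static final int minlen = 100;

    // JNI signature (I[BI[B)I: compare two unsigned byte[] keys given their lengths.
    static native int _compareBytes(int alen, byte[] a, int blen, byte[] b);

    // JNI signature (II[BII[B)I: the same comparison restricted to a slice
    // (offset and length) of each array.
    static native int _compareBytesWithOffsetAndLen(int aoff, int alen, byte[] a,
            int boff, int blen, byte[] b);

    static {
        // Assumed loading convention: the shared library must be resolvable
        // via the JNI library path (java.library.path, or PATH on Windows).
        System.loadLibrary("BytesUtil");
    }

}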
From: Bryan T. <tho...@us...> - 2007-04-13 15:04:29
|
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9284/src/test/com/bigdata Modified Files: TestAll.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. Index: TestAll.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/TestAll.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** TestAll.java 22 Mar 2007 15:04:16 -0000 1.4 --- TestAll.java 13 Apr 2007 15:04:24 -0000 1.5 *************** *** 85,89 **** suite.addTest( com.bigdata.util.TestAll.suite() ); suite.addTest( com.bigdata.rawstore.TestAll.suite() ); ! suite.addTest( com.bigdata.objndx.TestAll.suite() ); suite.addTest( com.bigdata.isolation.TestAll.suite() ); suite.addTest( com.bigdata.journal.TestAll.suite() ); --- 85,89 ---- suite.addTest( com.bigdata.util.TestAll.suite() ); suite.addTest( com.bigdata.rawstore.TestAll.suite() ); ! suite.addTest( com.bigdata.btree.TestAll.suite() ); suite.addTest( com.bigdata.isolation.TestAll.suite() ); suite.addTest( com.bigdata.journal.TestAll.suite() ); |
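The com.bigdata.btree.TestAll referenced by the updated top-level suite is itself a JUnit 3 aggregate exposing the same static suite() convention. The fragment below is a hypothetical sketch of that pattern using a few of the test classes added by this commit; the actual class list and ordering in the committed TestAll.java may differ.

package com.bigdata.btree;

import junit.framework.Test;
import junit.framework.TestSuite;

// Illustrative per-package aggregate suite (the real class list is not shown here).
public class TestAll {

    public static Test suite() {

        TestSuite suite = new TestSuite("com.bigdata.btree");

        suite.addTestSuite(TestBytesUtil.class);
        suite.addTestSuite(TestKeyBuilder.class);
        suite.addTestSuite(TestBTree.class);
        suite.addTestSuite(TestIndexSegmentPlan.class);

        return suite;

    }

}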
From: Bryan T. <tho...@us...> - 2007-04-13 15:04:29
|
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/rawstore In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9284/src/java/com/bigdata/rawstore Modified Files: IRawStore.java Addr.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. Index: Addr.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/rawstore/Addr.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** Addr.java 27 Mar 2007 14:34:21 -0000 1.3 --- Addr.java 13 Apr 2007 15:04:24 -0000 1.4 *************** *** 50,54 **** import org.CognitiveWeb.extser.LongPacker; ! import com.bigdata.objndx.IndexSegmentBuilder; /** --- 50,54 ---- import org.CognitiveWeb.extser.LongPacker; ! import com.bigdata.btree.IndexSegmentBuilder; /** Index: IRawStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/rawstore/IRawStore.java,v retrieving revision 1.6 retrieving revision 1.7 diff -C2 -d -r1.6 -r1.7 *** IRawStore.java 6 Mar 2007 20:38:05 -0000 1.6 --- IRawStore.java 13 Apr 2007 15:04:24 -0000 1.7 *************** *** 51,56 **** import java.nio.ByteBuffer; import com.bigdata.journal.Journal; - import com.bigdata.objndx.BTree; /** --- 51,56 ---- import java.nio.ByteBuffer; + import com.bigdata.btree.BTree; import com.bigdata.journal.Journal; /** |
From: Bryan T. <tho...@us...> - 2007-04-13 15:04:27
|
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/isolation In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9284/src/test/com/bigdata/isolation Modified Files: TestValueSerializer.java TestIsolatedBTree.java TestUnisolatedBTree.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. Index: TestUnisolatedBTree.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/isolation/TestUnisolatedBTree.java,v retrieving revision 1.7 retrieving revision 1.8 diff -C2 -d -r1.7 -r1.8 *** TestUnisolatedBTree.java 27 Mar 2007 14:34:24 -0000 1.7 --- TestUnisolatedBTree.java 13 Apr 2007 15:04:23 -0000 1.8 *************** *** 50,58 **** import java.util.UUID; ! import com.bigdata.objndx.AbstractBTree; ! import com.bigdata.objndx.AbstractBTreeTestCase; ! import com.bigdata.objndx.BTree; ! import com.bigdata.objndx.IBatchOp; ! import com.bigdata.objndx.IRangeQuery; import com.bigdata.rawstore.IRawStore; import com.bigdata.rawstore.SimpleMemoryRawStore; --- 50,58 ---- import java.util.UUID; ! import com.bigdata.btree.AbstractBTree; ! import com.bigdata.btree.AbstractBTreeTestCase; ! import com.bigdata.btree.BTree; ! import com.bigdata.btree.IBatchOp; ! import com.bigdata.btree.IRangeQuery; import com.bigdata.rawstore.IRawStore; import com.bigdata.rawstore.SimpleMemoryRawStore; *************** *** 371,375 **** * @todo test the batch apis. all methods must work with {@link Value}s * (the test for this could be a test of the ! * {@link IBatchOp#apply(com.bigdata.objndx.ISimpleBTree)} * implementations in the btree package since we apply that method in * a trivial manner to support the batch api. --- 371,375 ---- * @todo test the batch apis. all methods must work with {@link Value}s * (the test for this could be a test of the ! * {@link IBatchOp#apply(com.bigdata.btree.ISimpleBTree)} * implementations in the btree package since we apply that method in * a trivial manner to support the batch api. Index: TestIsolatedBTree.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/isolation/TestIsolatedBTree.java,v retrieving revision 1.6 retrieving revision 1.7 diff -C2 -d -r1.6 -r1.7 *** TestIsolatedBTree.java 27 Mar 2007 14:34:24 -0000 1.6 --- TestIsolatedBTree.java 13 Apr 2007 15:04:23 -0000 1.7 *************** *** 50,57 **** import java.util.UUID; import com.bigdata.journal.TestTx; - import com.bigdata.objndx.AbstractBTreeTestCase; - import com.bigdata.objndx.BTreeMetadata; - import com.bigdata.objndx.IBatchOp; import com.bigdata.rawstore.IRawStore; import com.bigdata.rawstore.SimpleMemoryRawStore; --- 50,57 ---- import java.util.UUID; + import com.bigdata.btree.AbstractBTreeTestCase; + import com.bigdata.btree.BTreeMetadata; + import com.bigdata.btree.IBatchOp; import com.bigdata.journal.TestTx; import com.bigdata.rawstore.IRawStore; import com.bigdata.rawstore.SimpleMemoryRawStore; *************** *** 75,79 **** * @todo test the batch apis. all methods must work with {@link Value}s (the * test for this could be a test of the ! * {@link IBatchOp#apply(com.bigdata.objndx.ISimpleBTree)} implementations * in the btree package since we apply that method in a trivial manner to * support the batch api. --- 75,79 ---- * @todo test the batch apis. all methods must work with {@link Value}s (the * test for this could be a test of the ! 
* {@link IBatchOp#apply(com.bigdata.btree.ISimpleBTree)} implementations * in the btree package since we apply that method in a trivial manner to * support the batch api. Index: TestValueSerializer.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/isolation/TestValueSerializer.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** TestValueSerializer.java 12 Apr 2007 23:59:35 -0000 1.3 --- TestValueSerializer.java 13 Apr 2007 15:04:23 -0000 1.4 *************** *** 56,60 **** import junit.framework.TestCase2; ! import com.bigdata.objndx.DataOutputBuffer; /** --- 56,60 ---- import junit.framework.TestCase2; ! import com.bigdata.btree.DataOutputBuffer; /** |
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/objndx In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9284/src/test/com/bigdata/objndx Removed Files: TestIndexSegmentBuilderWithSmallTree.java TestInsertLookupRemoveOnRootLeafWithBatchApi.java TestUtilMethods.java TestIncrementalWrite.java TestImmutableKeyBuffer.java TestInsertLookupRemoveKeysInRootLeaf.java SimpleEntry.java TestDataOutputBuffer.java TestLongPacker.java TestAbstractKeyBuffer.java TestKeyBufferSearch.java TestLinearListMethods.java TestIndexSegmentBuilderWithLargeTrees.java TestIndexSegmentFastLeafScan.java TestByteArrayValueSerializer.java TestRestartSafe.java TestBTree.java TestKeyBufferSerializer.java TestBytesUtil.java TestShortPacker.java NoEvictionListener.java TestNodeSerializer.java TestSplitJoinThreeLevels.java TestUserDefinedFunction.java TestSuccessorUtil.java TestMutableKeyBuffer.java TestBloomFilter.java TestSplitJoinRootLeaf.java TestReopen.java AbstractBTreeTestCase.java TestCopyOnWrite.java TestFusedView.java TestIndexSegmentMerger.java TestSplitRootLeaf.java TestKeyBuilder.java TestRecordCompressor.java TestIndexSegmentWithBloomFilter.java TestInvariants.java MyEvictionListener.java MyHardReferenceQueue.java TestFindChild.java TestIndexSegmentPlan.java TestAll.java TestIndexSegmentAddressSerializer.java TestTouch.java TestIterators.java TestDirtyIterators.java TestCommit.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. --- TestIndexSegmentBuilderWithLargeTrees.java DELETED --- --- TestBTree.java DELETED --- --- TestIndexSegmentPlan.java DELETED --- --- TestKeyBufferSearch.java DELETED --- --- TestKeyBufferSerializer.java DELETED --- --- TestCommit.java DELETED --- --- TestSplitJoinRootLeaf.java DELETED --- --- TestLongPacker.java DELETED --- --- TestInvariants.java DELETED --- --- TestIndexSegmentWithBloomFilter.java DELETED --- --- MyHardReferenceQueue.java DELETED --- --- TestNodeSerializer.java DELETED --- --- TestRecordCompressor.java DELETED --- --- TestImmutableKeyBuffer.java DELETED --- --- TestLinearListMethods.java DELETED --- --- TestUserDefinedFunction.java DELETED --- --- TestMutableKeyBuffer.java DELETED --- --- TestFusedView.java DELETED --- --- TestSplitJoinThreeLevels.java DELETED --- --- AbstractBTreeTestCase.java DELETED --- --- TestIndexSegmentFastLeafScan.java DELETED --- --- MyEvictionListener.java DELETED --- --- TestAbstractKeyBuffer.java DELETED --- --- TestSplitRootLeaf.java DELETED --- --- TestKeyBuilder.java DELETED --- --- TestReopen.java DELETED --- --- TestDataOutputBuffer.java DELETED --- --- TestTouch.java DELETED --- --- TestInsertLookupRemoveKeysInRootLeaf.java DELETED --- --- TestFindChild.java DELETED --- --- NoEvictionListener.java DELETED --- --- TestRestartSafe.java DELETED --- --- TestIndexSegmentMerger.java DELETED --- --- TestByteArrayValueSerializer.java DELETED --- --- TestSuccessorUtil.java DELETED --- --- TestIterators.java DELETED --- --- TestUtilMethods.java DELETED --- --- TestInsertLookupRemoveOnRootLeafWithBatchApi.java DELETED --- --- TestDirtyIterators.java DELETED --- --- TestIndexSegmentBuilderWithSmallTree.java DELETED --- --- TestIndexSegmentAddressSerializer.java DELETED --- --- SimpleEntry.java DELETED --- --- TestAll.java DELETED --- --- TestCopyOnWrite.java DELETED --- --- TestBloomFilter.java DELETED --- --- TestIncrementalWrite.java DELETED --- --- TestShortPacker.java DELETED --- --- TestBytesUtil.java 
DELETED --- |
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/btree In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9284/src/test/com/bigdata/btree Added Files: TestIndexSegmentBuilderWithSmallTree.java TestFindChild.java TestIndexSegmentBuilderWithLargeTrees.java TestSplitJoinThreeLevels.java MyEvictionListener.java TestIndexSegmentMerger.java NoEvictionListener.java TestKeyBufferSerializer.java TestFusedView.java TestLongPacker.java TestImmutableKeyBuffer.java TestRestartSafe.java TestDirtyIterators.java TestIncrementalWrite.java TestKeyBuilder.java TestMutableKeyBuffer.java TestInvariants.java TestReopen.java SimpleEntry.java TestIndexSegmentPlan.java TestSuccessorUtil.java TestRecordCompressor.java TestNodeSerializer.java TestTouch.java TestSplitJoinRootLeaf.java TestSplitRootLeaf.java MyHardReferenceQueue.java TestBloomFilter.java TestKeyBufferSearch.java TestIterators.java TestIndexSegmentWithBloomFilter.java TestByteArrayValueSerializer.java TestShortPacker.java TestLinearListMethods.java TestAbstractKeyBuffer.java TestIndexSegmentFastLeafScan.java TestIndexSegmentAddressSerializer.java TestCommit.java TestInsertLookupRemoveOnRootLeafWithBatchApi.java TestAll.java TestInsertLookupRemoveKeysInRootLeaf.java TestUserDefinedFunction.java TestUtilMethods.java AbstractBTreeTestCase.java TestCopyOnWrite.java TestBTree.java TestBytesUtil.java TestDataOutputBuffer.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. --- NEW FILE: TestIndexSegmentBuilderWithLargeTrees.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Dec 21, 2006 */ package com.bigdata.btree; import java.io.File; import java.io.IOException; import java.util.Properties; import java.util.UUID; import com.bigdata.btree.BTree; import com.bigdata.btree.IndexSegment; import com.bigdata.btree.IndexSegmentBuilder; import com.bigdata.btree.IndexSegmentFileStore; import com.bigdata.journal.BufferMode; import com.bigdata.journal.Journal; import com.bigdata.journal.Options; /** * Test build trees on the journal, evicts them into an {@link IndexSegment}, * and then compares the trees for the same total ordering. 
* * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public class TestIndexSegmentBuilderWithLargeTrees extends AbstractBTreeTestCase { public TestIndexSegmentBuilderWithLargeTrees() { } public TestIndexSegmentBuilderWithLargeTrees(String name) { super(name); } public Properties getProperties() { if (properties == null) { properties = super.getProperties(); properties.setProperty(Options.BUFFER_MODE, BufferMode.Direct .toString()); properties.setProperty(Options.CREATE_TEMP_FILE, "true"); } return properties; } private Properties properties; /** * Return a btree backed by a journal with the indicated branching factor. * The serializer requires that values in leaves are {@link SimpleEntry} * objects. * * @param branchingFactor * The branching factor. * * @return The btree. */ public BTree getBTree(int branchingFactor) { Journal journal = new Journal(getProperties()); BTree btree = new BTree(journal, branchingFactor, UUID.randomUUID(), SimpleEntry.Serializer.INSTANCE); return btree; } // /** // * Test exercises a known problem case. // */ // public void test_buildOrder10n3() throws IOException { // // int[] keys = new int[]{1,3,5}; // SimpleEntry[] vals = new SimpleEntry[] { // new SimpleEntry(1), // new SimpleEntry(3), // new SimpleEntry(5), // }; // // BTree btree = getBTree(3); // // for(int i=0;i<keys.length; i++) { // btree.insert(keys[i], vals[i]); // } // // File outFile = new File(getName()); // // if(outFile.exists() && !outFile.delete()) { // fail("Could not delete file: "+outFile); // } // // new IndexSegmentBuilder(outFile,outFile.getAbsoluteFile().getParentFile(),btree,10); // // /* // * Verify can load the index file and that the metadata // * associated with the index file is correct (we are only // * checking those aspects that are easily defined by the test // * case and not, for example, those aspects that depend on the // * specifics of the length of serialized nodes or leaves). // */ // final IndexSegment seg = new IndexSegment(new IndexSegmentFileStore(outFile), // // setup reference queue to hold all leaves and nodes. // new HardReferenceQueue<PO>(new DefaultEvictionListener(), // 1, 1), // // take the other parameters from the btree. // btree.NEGINF, btree.comparator, // btree.nodeSer.keySerializer, btree.nodeSer.valueSerializer); // TestIndexSegmentPlan.assertEquals(10,seg.getBranchingFactor()); // TestIndexSegmentPlan.assertEquals(0,seg.getHeight()); // TestIndexSegmentPlan.assertEquals(ArrayType.INT,seg.getKeyType()); // TestIndexSegmentPlan.assertEquals(1,seg.getLeafCount()); // TestIndexSegmentPlan.assertEquals(0,seg.getNodeCount()); // TestIndexSegmentPlan.assertEquals(3,seg.size()); // // /* // * Verify the total index order. // */ // assertSameBTree(btree, seg); // // } /** * Branching factors for the source btree that is then used to build an * {@link IndexSegment}. This parameter indirectly determines both the #of * leaves and the #of entries in the source btree. * * Note: Regardless of the branching factor in the source btree, the same * {@link IndexSegment} should be build for a given set of entries * (key-value pairs) and a given output branching factor for the * {@link IndexSegment}. However, input trees of different heights also * stress different parts of the algorithm. */ final int[] branchingFactors = new int[]{3,4,5,10,20};//64};//128};//,512}; /** * A stress test for building {@link IndexSegment}s. A variety of * {@link BTree}s are built from dense random keys in [1:n] using a variety * of branching factors. 
For each {@link BTree}, a variety of * {@link IndexSegment}s are built using a variety of output branching * factors. For each {@link IndexSegment}, we then compare it against its * source {@link BTree} for the same total ordering. */ public void test_randomDenseKeys() throws IOException { for(int i=0; i<branchingFactors.length; i++) { int m = branchingFactors[i]; doBuildIndexSegmentAndCompare( doSplitWithRandomDenseKeySequence( getBTree(m), m, m ) ); doBuildIndexSegmentAndCompare( doSplitWithRandomDenseKeySequence( getBTree(m), m, m*m ) ); doBuildIndexSegmentAndCompare( doSplitWithRandomDenseKeySequence( getBTree(m), m, m*m*m ) ); // Note: overflows the initial journal extent. // doBuildIndexSegmentAndCompare( doSplitWithRandomDenseKeySequence( getBTree(m), m, m*m*m*m ) ); } } /** * A stress test for building {@link IndexSegment}s. A variety of * {@link BTree}s are built from spase random keys using a variety of * branching factors. For each {@link BTree}, a variety of * {@link IndexSegment}s are built using a variety of output branching * factors. For each {@link IndexSegment}, we then compare it against its * source {@link BTree} for the same total ordering. */ public void test_randomSparseKeys() throws IOException { int trace = 0; for(int i=0; i<branchingFactors.length; i++) { int m = branchingFactors[i]; doBuildIndexSegmentAndCompare( doInsertRandomSparseKeySequenceTest(m,m,trace)); doBuildIndexSegmentAndCompare( doInsertRandomSparseKeySequenceTest(m,m*m,trace) ); doBuildIndexSegmentAndCompare( doInsertRandomSparseKeySequenceTest(m,m*m*m,trace) ); // Note: overflows the initial journal extent. // doBuildIndexSegmentAndCompare( doInsertRandomSparseKeySequenceTest(m,m*m*m*m,trace) ); } } /** * Test helper builds an index segment from the btree using several * different branching factors and each time compares the resulting total * ordering to the original btree. * * @param btree The source btree. */ public void doBuildIndexSegmentAndCompare(BTree btree) throws IOException { // branching factors used for the index segment. final int branchingFactors[] = new int[]{3,4,5,10,20,60,100,256,1024,4096,8192}; for( int i=0; i<branchingFactors.length; i++ ) { int m = branchingFactors[i]; final File outFile = new File(getName()+"_m"+m+ ".seg"); if( outFile.exists() && ! outFile.delete() ) { fail("Could not delete old index segment: "+outFile.getAbsoluteFile()); } final File tmpDir = outFile.getAbsoluteFile().getParentFile(); /* * Build the index segment. */ System.err.println("Building index segment: in(m=" + btree.getBranchingFactor() + ", nentries=" + btree.getEntryCount() + "), out(m=" + m + ")"); new IndexSegmentBuilder(outFile, tmpDir, btree, m, 0.); /* * Verify can load the index file and that the metadata associated * with the index file is correct (we are only checking those * aspects that are easily defined by the test case and not, for * example, those aspects that depend on the specifics of the length * of serialized nodes or leaves). */ System.err.println("Opening index segment."); final IndexSegment seg = new IndexSegmentFileStore(outFile).load(); /* * Verify the total index order. */ System.err.println("Verifying index segment."); assertSameBTree(btree, seg); System.err.println("Closing index segment."); seg.close(); /* * Note: close() is a reversable operation. This verifies that by * immediately re-verifying the index segment. The index segment * SHOULD be transparently re-opened for this operation. 
*/ System.err.println("Re-verifying index segment."); assertSameBTree(btree, seg); // Close again so that we can delete the backing file. System.err.println("Re-closing index segment."); seg.close(); if (!outFile.delete()) { log.warn("Could not delete index segment: " + outFile); } } // build index segment with the next branching factor. /* * Closing the journal. */ System.err.println("Closing journal."); btree.getStore().closeAndDelete(); } } --- NEW FILE: TestBTree.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Nov 8, 2006 */ package com.bigdata.btree; /** * Stress tests for basic tree operations (insert, lookup, and remove) without * causing node or leaf evictions (IO is disabled). * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public class TestBTree extends AbstractBTreeTestCase { /** * */ public TestBTree() { } /** * @param arg0 */ public TestBTree(String arg0) { super(arg0); } /* * test helpers. */ /* * Test structure modification. */ /** * Stress test for building up a tree and then removing all keys in a random * order. */ public void test_stress_removeStructure() { int nkeys = 500; doRemoveStructureStressTest(3,nkeys); doRemoveStructureStressTest(4,nkeys); doRemoveStructureStressTest(5,nkeys); } /** * Stress test of insert, removal and lookup of keys in the tree (allows * splitting of the root leaf). * * Note: The #of inserts is limited by the size of the leaf hard reference * queue since evictions are disabled for the tests in this file. We can not * know in advance how many touches will result and when leaf evictions will * begin, so ntrials is set heuristically. */ public void test_insertLookupRemoveKeyTreeStressTest() { int ntrials = 1000; doInsertLookupRemoveStressTest(3, 1000, ntrials); doInsertLookupRemoveStressTest(4, 1000, ntrials); doInsertLookupRemoveStressTest(5, 1000, ntrials); doInsertLookupRemoveStressTest(16, 10000, ntrials); } /** * Note: This error was actually a fence post in * {@link Node#dump(java.io.PrintStream, int, boolean))}. That method was * incorrectly reporting an error when nkeys was zero after a split of a * node. 
*/ public void test_errorSequence001() { int m = 3; int[] order = new int[] { 0, 1, 6, 3, 7, 2, 4, 5, 8 }; doKnownKeySequenceTest( m, order, 3 ); } /** * A stress test for sequential key insertion that runs with a variety of * branching factors and #of keys to insert. */ public void test_splitRootLeaf_increasingKeySequence() { int[] branchingFactors = new int[]{3,4,5};// 6,7,8,20,55,79,256,512,1024,4097}; for(int i=0; i<branchingFactors.length; i++) { int m = branchingFactors[i]; doSplitWithIncreasingKeySequence( getBTree(m), m, m ); doSplitWithIncreasingKeySequence( getBTree(m), m, m*m ); doSplitWithIncreasingKeySequence( getBTree(m), m, m*m*m ); doSplitWithIncreasingKeySequence( getBTree(m), m, m*m*m*m ); } } /** * A stress test for sequential decreasing key insertions that runs with a * variety of branching factors and #of keys to insert. */ public void test_splitRootLeaf_decreasingKeySequence() { int[] branchingFactors = new int[]{3,4,5};// 6,7,8,20,55,79,256,512,1024,4097}; for(int i=0; i<branchingFactors.length; i++) { int m = branchingFactors[i]; doSplitWithDecreasingKeySequence( getBTree(m), m, m ); doSplitWithDecreasingKeySequence( getBTree(m), m, m*m ); doSplitWithDecreasingKeySequence( getBTree(m), m, m*m*m ); doSplitWithDecreasingKeySequence( getBTree(m), m, m*m*m*m ); } } /** * Stress test inserts random permutations of keys into btrees of order m * for several different btrees, #of keys to be inserted, and permutations * of keys. */ public void test_stress_split() { doSplitTest( 3, 0 ); doSplitTest( 4, 0 ); doSplitTest( 5, 0 ); } /** * A stress test for random key insertion using a that runs with a variety * of branching factors and #of keys to insert. */ public void test_splitRootLeaf_randomKeySequence() { int[] branchingFactors = new int[]{3,4,5};// 6,7,8,20,55,79,256,512,1024,4097}; for(int i=0; i<branchingFactors.length; i++) { int m = branchingFactors[i]; doSplitWithRandomDenseKeySequence( getBTree(m), m, m ); doSplitWithRandomDenseKeySequence( getBTree(m), m, m*m ); doSplitWithRandomDenseKeySequence( getBTree(m), m, m*m*m ); doSplitWithRandomDenseKeySequence( getBTree(m), m, m*m*m*m ); } } } --- NEW FILE: TestIndexSegmentPlan.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. 
Modifications: */ /* * Created on Dec 14, 2006 */ package com.bigdata.btree; import com.bigdata.btree.BTree; import com.bigdata.btree.IndexSegment; import com.bigdata.btree.IndexSegmentBuilder; import com.bigdata.btree.IndexSegmentPlan; /** * Test suite for efficient post-order rebuild of an index in an external index * segment. * * @todo verify post-conditions for files (temp file is deleted, perhaps the * index segment is read only). * * @todo try building large indices, exporting them into index segments, and * then verifying that the index segments have the correct data. We can * run a variety of index stress tests to build the index, sweep in data * from the file system, etc., and then generate the corresponding index * segment and validate it against the in memory {@link BTree}. * * @todo The notion of merging multiple index segments requires a notion of * which index segments are more recent or alternatively which values are * more recent so that we can reconcile values for the same key. this is * linked to how we will handle transactional isolation. * * @todo Handle "delete" markers. For full transactional isolation we need to * keep delete markers around until there are no more live transactions * that could read the index entry. This suggests that we probably want to * use the transaction timestamp rather than a version counter. Consider * that a read by tx1 needs to check the index on the journal and then * each index segment in turn in reverse historical order until an entry * (potentially a delete marker) is found that is equal to or less than * the timestamp of the committed state from which tx1 was born. This * means that an index miss must test the journal and all live segments * for that index (hence the use of bloom filters to filter out index * misses). It also suggests that we should keep the timestamp as part of * the key, except in the ground state index on the journal where the * timestamp is the timestamp of the last commit of the journal. This * probably will also address VLR TX that would span a freeze of the * journal. We expunge the isolated index into a segment and do a merge * when the transaction finally commits. We wind up doing the same * validation and merge steps as when the isolation occurs within a more * limited buffer, but in more of a batch fashion. This might work nicely * if we buffer the isolatation index out to a certain size in memory and * then start to spill it onto the journal. If fact, the hard reference * queue already does this so we can just test to see if (a) anything has * been written out from the isolation index; and (b) whether or not the * journal was frozen since the isolation index was created. * * Should the merge down should impose the transaction commit timestamp on the * items in the index? * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public class TestIndexSegmentPlan extends AbstractBTreeTestCase { /** * */ public TestIndexSegmentPlan() { } /** * @param name */ public TestIndexSegmentPlan(String name) { super(name); } /** * Test {@link IndexSegmentBuilder#getMinimumHeight(int, int)}. This * routine is responsible for determining the minimum height of a tree given * a branching factor and a #of index entries. 
*/ public void test_minimumHeight() { assertEquals( 0, IndexSegmentPlan.getMinimumHeight(3, 1)); assertEquals( 1, IndexSegmentPlan.getMinimumHeight(3, 2)); assertEquals( 1, IndexSegmentPlan.getMinimumHeight(3, 3)); assertEquals( 2, IndexSegmentPlan.getMinimumHeight(3, 4)); assertEquals( 2, IndexSegmentPlan.getMinimumHeight(3, 5)); assertEquals( 2, IndexSegmentPlan.getMinimumHeight(3, 6)); assertEquals( 2, IndexSegmentPlan.getMinimumHeight(3, 7)); assertEquals( 2, IndexSegmentPlan.getMinimumHeight(3, 8)); assertEquals( 2, IndexSegmentPlan.getMinimumHeight(3, 9)); assertEquals( 3, IndexSegmentPlan.getMinimumHeight(3, 10)); } /** * A series of tests for * {@link IndexSegmentBuilder#distributeKeys(int m, int m/2, int nleaves, int nentries)}. * * This routine is responsible for deciding how many index entries will go * into each leaf of the generated {@link IndexSegment}. In particular, it * compensates when there would be an underflow in the last leaf unless we * short some of the earlier leaves so that all leaves have at least their * minimum capacity. * * @see src/architecture/btree.xls, which has the examples from which these * tests are derived. */ public void test_distributeKeys_m3() { assertEquals(new int[] { 3, 3, 2, 2 }, IndexSegmentPlan.distributeKeys(3, (3 + 1) / 2, 4, 10)); } public void test_distributeKeys_m4() { assertEquals(new int[] { 4, 4, 2 }, IndexSegmentPlan.distributeKeys( 4, (4 + 1) / 2, 3, 10)); } public void test_distributeKeys_m5() { assertEquals(new int[] { 5,5 }, IndexSegmentPlan.distributeKeys( 5, (5+ 1) / 2, 2, 10)); } public void test_distributeKeys_m6() { assertEquals(new int[] { 6,4 }, IndexSegmentPlan.distributeKeys( 6, (6+ 1) / 2, 2, 10)); } public void test_distributeKeys_m7() { assertEquals(new int[] { 6,4 }, IndexSegmentPlan.distributeKeys( 7, (7+ 1) / 2, 2, 10)); } public void test_distributeKeys_m8() { assertEquals(new int[] { 6,4 }, IndexSegmentPlan.distributeKeys( 8, (8+ 1) / 2, 2, 10)); } public void test_distributeKeys_m9() { assertEquals(new int[] { 5, 5 }, IndexSegmentPlan.distributeKeys(9, (9 + 1) / 2, 2, 10)); } public void test_distributeKeys_m10() { assertEquals(new int[] { 10 }, IndexSegmentPlan.distributeKeys(10, (10 + 1) / 2, 1, 10)); } /** * Test where the root leaf has fewer than (m+1)/2 entries. The root is * never under capacity, so this tests that the function to distribute the * keys accepts a root leaf under these circumstances. */ public void test_distributeKeys_rootUnderCapacity() { assertEquals(new int[] { 3 }, IndexSegmentPlan.distributeKeys(10, (10 + 1) / 2, 1, 3)); } /* * */ /** * An application of the routine to distribute children among nodes - the * logic is identical to distributing keys among leaves except that the * result must be interpreted as the #of children NOT the #of keys. An alias * is provided to help clarify this distinction. * * @see IndexSegmentBuilder#distributeKeys(int, int, int, int) * @see IndexSegmentBuilder#distributeChildren(int, int, int, int) */ public void test_distributeChildren01() { assertEquals(new int[] { 2, 2, 2, 2, 2 }, IndexSegmentPlan .distributeKeys(3, (3 + 1) / 2, 5, 10)); } /** * Tests {@link IndexSegmentPlan} for a tree with a branching factor of * (m=3) and (n=10) entries. 
*/ public void test_plan_m3_n10() { IndexSegmentPlan plan = new IndexSegmentPlan(3,10); assertEquals("m",3,plan.m); assertEquals("(m+1/2)",2,plan.m2); assertEquals("nentries",10,plan.nentries); assertEquals("nleaves",4,plan.nleaves); assertEquals("nnodes",3,plan.nnodes); assertEquals("height",2,plan.height); assertEquals("numInLeaf[]",new int[]{3,3,2,2},plan.numInLeaf); assertEquals("numInLevel[]",new int[]{1,2,4},plan.numInLevel); assertEquals("numInNode[][]",plan.height+1,plan.numInNode.length); assertEquals("numInNode[0][]",new int[]{2},plan.numInNode[0]); assertEquals("numInNode[1][]",new int[]{2,2},plan.numInNode[1]); assertEquals("numInNode[2][]",new int[]{3,3,2,2},plan.numInNode[2]); } /** * Tests {@link IndexSegmentPlan} for a tree with a branching factor of * (m=4) and (n=10) entries. */ public void test_plan_m4_n10() { IndexSegmentPlan plan = new IndexSegmentPlan(4,10); assertEquals("m",4,plan.m); assertEquals("(m+1/2)",2,plan.m2); assertEquals("nentries",10,plan.nentries); assertEquals("nleaves",3,plan.nleaves); assertEquals("nnodes",1,plan.nnodes); assertEquals("height",1,plan.height); assertEquals("numInLeaf[]",new int[]{4,4,2},plan.numInLeaf); assertEquals("numInLevel[]",new int[]{1,3},plan.numInLevel); assertEquals("numInNode[][]",plan.height+1,plan.numInNode.length); assertEquals("numInNode[0][]",new int[]{3},plan.numInNode[0]); assertEquals("numInNode[1][]",new int[]{4,4,2},plan.numInNode[1]); } /** * Tests {@link IndexSegmentPlan} for a tree with a branching factor of * (m=5) and (n=10) entries. */ public void test_plan_m5_n10() { IndexSegmentPlan plan = new IndexSegmentPlan(5,10); assertEquals("m",5,plan.m); assertEquals("(m+1/2)",3,plan.m2); assertEquals("nentries",10,plan.nentries); assertEquals("nleaves",2,plan.nleaves); assertEquals("nnodes",1,plan.nnodes); assertEquals("height",1,plan.height); assertEquals("numInLeaf[]",new int[]{5,5},plan.numInLeaf); assertEquals("numInLevel[]",new int[]{1,2},plan.numInLevel); assertEquals("numInNode[][]",plan.height+1,plan.numInNode.length); assertEquals("numInNode[0][]",new int[]{2},plan.numInNode[0]); assertEquals("numInNode[1][]",new int[]{5,5},plan.numInNode[1]); } /** * Tests {@link IndexSegmentPlan} for a tree with a branching factor of * (m=6) and (n=10) entries. */ public void test_plan_m6_n10() { IndexSegmentPlan plan = new IndexSegmentPlan(6,10); assertEquals("m",6,plan.m); assertEquals("(m+1/2)",3,plan.m2); assertEquals("nentries",10,plan.nentries); assertEquals("nleaves",2,plan.nleaves); assertEquals("nnodes",1,plan.nnodes); assertEquals("height",1,plan.height); assertEquals("numInLeaf[]",new int[]{6,4},plan.numInLeaf); assertEquals("numInLevel[]",new int[]{1,2},plan.numInLevel); assertEquals("numInNode[][]",plan.height+1,plan.numInNode.length); assertEquals("numInNode[0][]",new int[]{2},plan.numInNode[0]); assertEquals("numInNode[1][]",new int[]{6,4},plan.numInNode[1]); } /** * Tests {@link IndexSegmentPlan} for a tree with a branching factor of * (m=7) and (n=10) entries. 
*/ public void test_plan_m7_n10() { IndexSegmentPlan plan = new IndexSegmentPlan(7,10); assertEquals("m",7,plan.m); assertEquals("(m+1/2)",4,plan.m2); assertEquals("nentries",10,plan.nentries); assertEquals("nleaves",2,plan.nleaves); assertEquals("nnodes",1,plan.nnodes); assertEquals("height",1,plan.height); assertEquals("numInLeaf[]",new int[]{6,4},plan.numInLeaf); assertEquals("numInLevel[]",new int[]{1,2},plan.numInLevel); assertEquals("numInNode[][]",plan.height+1,plan.numInNode.length); assertEquals("numInNode[0][]",new int[]{2},plan.numInNode[0]); assertEquals("numInNode[1][]",new int[]{6,4},plan.numInNode[1]); } /** * Tests {@link IndexSegmentPlan} for a tree with a branching factor of * (m=8) and (n=10) entries. */ public void test_plan_m8_n10() { IndexSegmentPlan plan = new IndexSegmentPlan(8,10); assertEquals("m",8,plan.m); assertEquals("(m+1/2)",4,plan.m2); assertEquals("nentries",10,plan.nentries); assertEquals("nleaves",2,plan.nleaves); assertEquals("nnodes",1,plan.nnodes); assertEquals("height",1,plan.height); assertEquals("numInLeaf[]",new int[]{6,4},plan.numInLeaf); assertEquals("numInLevel[]",new int[]{1,2},plan.numInLevel); assertEquals("numInNode[][]",plan.height+1,plan.numInNode.length); assertEquals("numInNode[0][]",new int[]{2},plan.numInNode[0]); assertEquals("numInNode[1][]",new int[]{6,4},plan.numInNode[1]); } /** * Tests {@link IndexSegmentPlan} for a tree with a branching factor of * (m=9) and (n=10) entries. */ public void test_plan_m9_n10() { IndexSegmentPlan plan = new IndexSegmentPlan(9,10); assertEquals("m",9,plan.m); assertEquals("(m+1/2)",5,plan.m2); assertEquals("nentries",10,plan.nentries); assertEquals("nleaves",2,plan.nleaves); assertEquals("nnodes",1,plan.nnodes); assertEquals("height",1,plan.height); assertEquals("numInLeaf[]",new int[]{5,5},plan.numInLeaf); assertEquals("numInLevel[]",new int[]{1,2},plan.numInLevel); assertEquals("numInNode[][]",plan.height+1,plan.numInNode.length); assertEquals("numInNode[0][]",new int[]{2},plan.numInNode[0]); assertEquals("numInNode[1][]",new int[]{5,5},plan.numInNode[1]); } /** * Tests {@link IndexSegmentPlan} for a tree with a branching factor of * (m=3) and (n=20) entries. */ public void test_plan_m3_n20() { IndexSegmentPlan plan = new IndexSegmentPlan(3,20); assertEquals("m",3,plan.m); assertEquals("(m+1/2)",2,plan.m2); assertEquals("nentries",20,plan.nentries); assertEquals("nleaves",7,plan.nleaves); assertEquals("nnodes",4,plan.nnodes); assertEquals("height",2,plan.height); assertEquals("numInLeaf[]",new int[]{3,3,3,3,3,3,2},plan.numInLeaf); assertEquals("numInLevel[]",new int[]{1,3,7},plan.numInLevel); assertEquals("numInNode[][]",plan.height+1,plan.numInNode.length); assertEquals("numInNode[0][]",new int[]{3},plan.numInNode[0]); assertEquals("numInNode[1][]",new int[]{3,2,2},plan.numInNode[1]); assertEquals("numInNode[2][]",new int[]{3,3,3,3,3,3,2},plan.numInNode[2]); } } --- NEW FILE: TestKeyBufferSearch.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. 
You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Nov 12, 2006 */ package com.bigdata.btree; import com.bigdata.btree.AbstractKeyBuffer; import com.bigdata.btree.BytesUtil; import com.bigdata.btree.IKeyBuffer; import com.bigdata.btree.ImmutableKeyBuffer; import com.bigdata.btree.MutableKeyBuffer; import junit.framework.TestCase2; /** * Unit tests for {@link IKeyBuffer#search(int offset, byte[] searchKey)}. * * @todo write performance test? the existing code can no longer be used since * both linear and binary searches first test the shared prefix for the keys and * then search on the remainder... perhaps it could be used by elevating the * method to test the prefix into the performance test. another twist for the * performance test would be testing for loop unrolling conditions (N<~4) and * testing JNI vs pure Java for the comparison functions. * * @todo do version of test with negative and positive integer keys so that the * search on the encoded keys detects whether signed bytes or unsigned * bytes are being compared. * * @todo write stress test with offset != 0. * * @todo do tests with JNI code linked in. note that we only use * {@link BytesUtil#compareBytesWithLenAndOffset(int, int, byte[], int, int, byte[])} for * searching on the key buffers since that allows non-zero offsets into * the search key. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public class TestKeyBufferSearch extends TestCase2 { public TestKeyBufferSearch() { } public TestKeyBufferSearch(String name) { super(name); } // public static Test suite() { // // TestSuite suite = new TestSuite("IKeyBuffer.search"); // // suite.addTestSuite(TestBinarySearch.class); // suite.addTestSuite(TestLinearSearch.class); // // return suite; // // } // /* // * abstract search methods are implemented by subclasses for testing the // * linear vs binary search code. // */ public int search(AbstractKeyBuffer kbuf,byte[]key) { return kbuf.search(key); } /** * Test search for keys using both a mutable and an immutable key buffer and * a known set of keys. 
*/ public void test_search01() { byte[][] keys = new byte[5][]; int i = 0; keys[i++] = new byte[]{5}; // offset := 0, insert before := -1 keys[i++] = new byte[]{7}; // offset := 1, insert before := -2 keys[i++] = new byte[]{9}; // offset := 2, insert before := -3 keys[i++] = new byte[]{11}; // offset := 3, insert before := -4 keys[i++] = new byte[]{13}; // offset := 4, insert before := -5 // insert after := -6 int nkeys = 5; MutableKeyBuffer kbuf = new MutableKeyBuffer(nkeys,keys); doSearchTest01(kbuf); ImmutableKeyBuffer kbuf2 = new ImmutableKeyBuffer( kbuf ); doSearchTest01(kbuf2); } /** * Search using the specified buffer which must be pre-initialized with a * known set of keys. * * @param kbuf The buffer to be searched. */ private void doSearchTest01(AbstractKeyBuffer kbuf) { // The general formula for the record offset is: // // offset := sizeof(record) * ( index - 1 ) // // The general formula for the insertion point is: // // insert := - ( offset + 1 ) // // where [offset] is the offset of the record before which the // new record should be inserted. // // verify offset of record found. // // Verify finds the first record in the array. assertEquals(0, search(kbuf, new byte[]{5})); // Verify finds the 2nd record in the array. assertEquals(1, search(kbuf, new byte[]{7})); // Verify finds the penultimate record in the array. assertEquals(3, search(kbuf, new byte[]{11})); // Verify finds the last record in the array. assertEquals(4, search(kbuf, new byte[]{13})); // // verify insertion points (key not found). // // Verify insertion point for key less than any value in the // array. assertEquals(-1, search(kbuf, new byte[]{4})); // Verify insertion point for key between first and 2nd // records. assertEquals(-2, search(kbuf, new byte[]{6})); // Verify insertion point for key between penultimate and last // records. assertEquals(-5, search(kbuf, new byte[]{12})); // Verify insertion point for key greater than the last record. assertEquals(-6, search(kbuf, new byte[]{14})); } /** * Tests with non-zero offset into a key buffer with a shared prefix * of 3 bytes. */ public void test_search02() { // build up keys in sorted order. int nkeys = 3; int maxKeys = 3; byte[][] keys = new byte[nkeys][]; int i = 0; keys[i++] = new byte[]{1,3,4}; // offset := 0, insert before := -1 keys[i++] = new byte[]{1,3,4,1,0}; // offset := 1, insert before := -2 keys[i++] = new byte[]{1,3,4,2}; // offset := 2, insert before := -3 // insert after := -4 { MutableKeyBuffer kbuf = new MutableKeyBuffer(nkeys, keys); assertEquals(3,kbuf.getPrefixLength()); doSearchTest02(kbuf); try { // correct rejection when search key is null. kbuf.search(null); fail("Expecting: " + IllegalArgumentException.class); } catch (IllegalArgumentException ex) { System.err.println("Ignoring expected exception: " + ex); } } { ImmutableKeyBuffer kbuf = new ImmutableKeyBuffer(nkeys, maxKeys, keys); assertEquals(3,kbuf.getPrefixLength()); doSearchTest02(kbuf); try { // correct rejection when search key is null. kbuf.search(null); fail("Expecting: " + IllegalArgumentException.class); } catch (IllegalArgumentException ex) { System.err.println("Ignoring expected exception: " + ex); } } } private void doSearchTest02(IKeyBuffer kbuf) { System.err.println("kbuf="+kbuf); assertEquals(0,kbuf.search(new byte[]{1,3,4})); assertEquals(1,kbuf.search(new byte[]{1,3,4,1,0})); assertEquals(2,kbuf.search(new byte[]{1,3,4,2})); // test search before/between/after keys. 
assertEquals(-1,kbuf.search(new byte[]{1,3,3})); assertEquals(-2,kbuf.search(new byte[]{1,3,4,0})); assertEquals(-3,kbuf.search(new byte[]{1,3,4,1,1})); assertEquals(-4,kbuf.search(new byte[]{1,3,4,3})); } /** * Test with prefixLength of zero and various search keys. */ public void test_search03() { int nkeys = 3; int maxKeys = 3; byte[][] keys = new byte[][] { new byte[]{2,3,5}, new byte[]{4,5,6}, new byte[]{5,4,9} }; doSearchTest03( new MutableKeyBuffer(nkeys,keys)); doSearchTest03( new ImmutableKeyBuffer(nkeys,maxKeys,keys)); } private void doSearchTest03(IKeyBuffer kbuf) { assert kbuf.getPrefixLength() == 0; assertEquals(0,kbuf.search( new byte[]{2,3,5})); assertEquals(1,kbuf.search( new byte[]{4,5,6})); assertEquals(2,kbuf.search( new byte[]{5,4,9})); // test search before given keys. assertEquals(-1,kbuf.search( new byte[]{1,3,3})); assertEquals(-1,kbuf.search( new byte[]{2,3,3})); assertEquals(-1,kbuf.search( new byte[]{2,3,4})); assertEquals(-1,kbuf.search( new byte[]{2})); assertEquals(-1,kbuf.search( new byte[]{0})); assertEquals(-1,kbuf.search( new byte[]{})); // test search between given keys. assertEquals(-2,kbuf.search( new byte[]{2,3,5,0})); assertEquals(-2,kbuf.search( new byte[]{4,5,5,9})); assertEquals(-3,kbuf.search( new byte[]{4,5,6,0})); assertEquals(-3,kbuf.search( new byte[]{5,4,8,9})); // test search after given keys. assertEquals(-4,kbuf.search( new byte[]{5,4,9,0})); assertEquals(-4,kbuf.search( new byte[]{5,5})); assertEquals(-4,kbuf.search( new byte[]{6})); } /** * Test search on empty key buffer. */ public void test_search04() { int nkeys = 0; int maxKeys = 4; byte[][] keys = new byte[][] {}; doSearchTest04( new MutableKeyBuffer(nkeys,keys)); doSearchTest04( new ImmutableKeyBuffer(nkeys,0,keys)); doSearchTest04( new ImmutableKeyBuffer(nkeys,maxKeys,keys)); } private void doSearchTest04(IKeyBuffer kbuf) { assertEquals(-1,kbuf.search(new byte[]{})); assertEquals(-1,kbuf.search(new byte[]{9,9,9,9})); } public void test_prefixMatchLength() { // build up keys in sorted order. int nkeys = 3; int maxKeys = 3; byte[][] keys = new byte[nkeys][]; int i = 0; keys[i++] = new byte[]{1,3,4}; // offset := 0, insert before := -1 keys[i++] = new byte[]{1,3,4,0,0}; // offset := 1, insert before := -2 keys[i++] = new byte[]{1,3,4,1}; // offset := 2, insert before := -3 // insert after := -4 { MutableKeyBuffer kbuf = new MutableKeyBuffer(nkeys,keys); doPrefixMatchLengthTest(kbuf); } { ImmutableKeyBuffer kbuf = new ImmutableKeyBuffer(nkeys,maxKeys,keys); doPrefixMatchLengthTest(kbuf); } } private void doPrefixMatchLengthTest(AbstractKeyBuffer kbuf) { // verify the prefix length. final int prefixLength = 3; assertEquals("prefixLength", prefixLength, kbuf.getPrefixLength()); // verify the prefix. assertEquals("prefix", new byte[]{1,3,4}, kbuf.getPrefix()); System.err.println("prefix="+BytesUtil.toString(kbuf.getPrefix())); /* * test on some keys that are in the buffer. all keys in the buffer must * match the entire shared prefix. */ assertEquals(0, kbuf._prefixMatchLength(prefixLength, new byte[] { 1, 3, 4 })); assertEquals(0, kbuf._prefixMatchLength(prefixLength, new byte[] { 1, 3, 4, 0, 0 })); assertEquals(0, kbuf._prefixMatchLength(prefixLength, new byte[] { 1, 3, 4, 1 })); /* * now test on some keys that also match the entire prefix but are not * found in the buffer. 
*/ assertEquals(0, kbuf._prefixMatchLength(prefixLength, new byte[] { 1, 3, 4, 7 })); assertEquals(0, kbuf ._prefixMatchLength(prefixLength, new byte[] { 1, 3, 4, 0, 1, 3 })); /* * now test on some keys that order _before_ the prefix and hence before * all keys in the buffer. we include cases where the search key is * shorter than the prefix and cases when it is longer than the prefix. */ /* * test search keys that have nothing in common with the prefix. */ assertEquals(-1, kbuf._prefixMatchLength(prefixLength, new byte[] { 0 })); assertEquals(-1, kbuf._prefixMatchLength(prefixLength, new byte[] { 0, 0 })); assertEquals(-1, kbuf._prefixMatchLength(prefixLength, new byte[] { 0, 0, 0 })); assertEquals(-1, kbuf._prefixMatchLength(prefixLength, new byte[] { 0, 0, 0, 0 })); /* * test search keys that have only their first byte in common with the * prefix. */ assertEquals(-1, kbuf._prefixMatchLength(prefixLength, new byte[] { 1 })); assertEquals(-1, kbuf._prefixMatchLength(prefixLength, new byte[] { 1, 0 })); assertEquals(-1, kbuf._prefixMatchLength(prefixLength, new byte[] { 1, 0, 0 })); assertEquals(-1, kbuf._prefixMatchLength(prefixLength, new byte[] { 1, 0, 0, 0 })); /* * test search keys that have only their first two bytes in common with * the prefix. */ assertEquals(-1, kbuf._prefixMatchLength(prefixLength, new byte[] { 1, 3 })); assertEquals(-1, kbuf._prefixMatchLength(prefixLength, new byte[] { 1, 3, 0 })); assertEquals(-1, kbuf._prefixMatchLength(prefixLength, new byte[] { 1, 3, 0, 0 })); /* * test search keys that match the entire prefix (three bytes). */ assertEquals(0, kbuf._prefixMatchLength(prefixLength, new byte[] { 1, 3, 4 })); assertEquals(0, kbuf._prefixMatchLength(prefixLength, new byte[] { 1, 3, 4, 0 })); /* * now test on some keys that order _after_ the prefix and hence after * all keys in the buffer. we include cases where the search key is * shorter than the prefix and cases when it is longer than the prefix. */ assertEquals(-(3) - 1, kbuf._prefixMatchLength(prefixLength, new byte[] { 9 })); assertEquals(-(3) - 1, kbuf._prefixMatchLength(prefixLength, new byte[] { 9, 9 })); assertEquals(-(3) - 1, kbuf._prefixMatchLength(prefixLength, new byte[] { 9, 9, 9 })); assertEquals(-(3) - 1, kbuf._prefixMatchLength(prefixLength, new byte[] { 9, 9, 9, 9, 9 })); } // public static class TestLinearSearch extends TestKeyBufferSearch { // // public int search(AbstractKeyBuffer kbuf,byte[]key) { // // return kbuf._linearSearch(offset,key); // // } // // } // // public static class TestBinarySearch extends TestKeyBufferSearch { // // public int search(AbstractKeyBuffer kbuf,int offset,byte[]key) { // // return kbuf._binarySearch(offset,key); // // } // // } // /** // * Performance test to identify the tradeoff point for binary for linear // * search. // * // * @author <a href="mailto:tho...@us...">Bryan Thompson</a> // * @version $Id$ // */ // public static class PerformanceTest extends TestCase { // // public PerformanceTest() { // } // // public PerformanceTest(String name) { // super(name); // } // // public void testPerformance() { // // int ntrials = 50000; // // doPerformanceTest(ntrials); // // } // // Random r = new Random(); // // /** // * Generate a set of N random distinct byte[] keys in sorted order using // * an unsigned byte[] comparison function. // * // * @param nkeys The #of keys to generate. // * // * @param maxKeyLen The maximum length of a key. // */ // public byte[][] getRandomKeys(int nkeys, int maxKeyLen) { // // // used to ensure distinct keys. 
// Set<byte[]> set = new TreeSet<byte[]>(BytesUtil.UnsignedByteArrayComparator.INSTANCE); // // byte[][] keys = new byte[nkeys][]; // // int n = 0; // // while( n < nkeys ) { // // // random key length in [1:maxKeyLen]. // byte[] key = new byte[r.nextInt(maxKeyLen)+1]; // // // random data in the key. // r.nextBytes(key); // // if( set.add(key)) { // // keys[n++] = key; // // } // // } // // // place into sorted order. // Arrays.sort(keys,BytesUtil.UnsignedByteArrayComparator.INSTANCE); // // return keys; // // } // // /** // * Performance test comparing binary vs linear search. // * // * @param ntrials // */ // public void doPerformanceTest(int ntrials) { // // /* // * Note: searching large arrays first since that warms up the code // * even further and the difference between the linear vs binary // * algorithms will only show up at small N, which we test last with // * the "warmest" code. // */ // //int[] capacity = new int[]{8,16,32,48,64,96,128,256,512,768,1024}; // int[] capacity = new int[]{1024,768,512,256,128,96,64,48,32,24,16,12,8,4}; // // for( int k = 0; k < capacity.length; k++ ) { // // int nkeys = capacity[k]; // // /* // * @todo see how performance varies by average key length and // * consider the key length distribution as well as the key value // * distribution - the latter is important for interpolated // * search. // */ // byte[][] keys = getRandomKeys(nkeys, 20); // // { // // MutableKeyBuffer kbuf = new MutableKeyBuffer(nkeys, keys); // // final int prefixLength = kbuf.getPrefixLength(); // // // [0:prefixLength-1]. // final int offset = prefixLength == 0 ? 0 : r.nextInt(prefixLength); // // long elapsedLinear1 = doTest(true, ntrials, keys, kbuf); // // long elapsedBinary1 = doTest(false, ntrials, keys, kbuf); // // System.err // .println(" mutable[]: nkeys=" // + nkeys // + ", trials=" // + ntrials // + ", offset="+offset // + ", elapsedLinear=" // + elapsedLinear1 // + "ns" // + ", elapsedBinary=" // + elapsedBinary1 // + "ns" // + (elapsedLinear1 < elapsedBinary1 ? ", linear wins" // : ", binary wins") + " by " // + Math.abs(elapsedLinear1 - elapsedBinary1) // + "ns"); // } // // { // // ImmutableKeyBuffer kbuf = new ImmutableKeyBuffer(nkeys,keys); // // final int prefixLength = kbuf.getPrefixLength(); // // // [0:prefixLength-1]. // final int offset = prefixLength == 0 ? 0 : r.nextInt(prefixLength); // // long elapsedLinear2 = doTest(true, ntrials, keys, kbuf); // // long elapsedBinary2 = doTest(false, ntrials, keys, kbuf); // // System.err // .println("immutable[]: nkeys=" // + nkeys // + ", trials=" // + ntrials // + ", offset="+offset // + ", elapsedLinear=" // + elapsedLinear2 // + "ns" // + ", elapsedBinary=" // + elapsedBinary2 // + "ns" // + (elapsedLinear2 < elapsedBinary2 ? ", linear wins" // : ", binary wins") + " by " // + Math.abs(elapsedLinear2 - elapsedBinary2) // + "ns"); // } // // } // // } // // /** // * Time a bunch of searches. // * // * @param linear // * use linear search when true // * @param ntrials // * #of searches to perform. // * @param keys // * search keys are randomly selected from this array of keys. // * @param kbuf // * The key buffer on which the search will be performed. // * // * @return the elapsed time aggregated across the searches. // */ // public long doTest(boolean linear,int ntrials, byte[][] keys, AbstractKeyBuffer kbuf) { // // long elapsedNanos = 0; // // final int nkeys = kbuf.getKeyCount(); // // for( int i=0; i<ntrials; i++ ) { // // // index of the search key in the key buffer. 
// final int index = r.nextInt(nkeys); // // byte[] key = keys[ index ]; // // final int index2; // // long beginNanos = System.nanoTime(); // // if( linear ) { // // index2 = kbuf._linearSearch(offset,key); // // } else { // // index2 = kbuf._binarySearch(offset,key); // // } // // elapsedNanos += System.nanoTime() - beginNanos; // // // make sure the search result is correct. // if (index != index2) { // // fail("Expected to find search key at index=" + index // + ", not at index=" + index2 + "; searchKey=" + key // + ", key[index]=" // + BytesUtil.toString(kbuf.getKey(index)) // + ", key[index2]=" // + BytesUtil.toString(kbuf.getKey(index2))); // // } // // } // // return elapsedNanos; // // } // // } } --- NEW FILE: TestKeyBufferSerializer.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modi... [truncated message content] |
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/objndx In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9284/src/java/com/bigdata/objndx Removed Files: ConditionalInsert.java DirtyChildIterator.java IIndex.java BatchInsert.java ReadOnlyFusedView.java AbstractBTree.java MutableKeyBuffer.java IndexSegmentPlan.java KeyBufferSerializer.java ReadOnlyIndex.java BTreeMetadata.java DefaultEvictionListener.java DataOutputBuffer.java Node.java IRangeQuery.java ISimpleBTree.java ChildIterator.java Counters.java IReadOnlyBatchOp.java AddressSerializer.java BTree.java IndexSegmentFileStore.java IIdentityAccess.java NoSuccessorException.java EntryIterator.java BatchRemove.java ILeafData.java compile.sh IndexSegmentExtensionMetadata.java INodeFactory.java BytesUtil.class IKeySerializer.java IAbstractNodeData.java IAbstractNode.java ConditionalInsertNoValue.java IEvictionListener.java IDirty.java Tuple.java ByteArrayValueSerializer.java IBatchBTree.java IKeyBuffer.java PackedAddressSerializer.java BytesUtil$UnsignedByteArrayComparator.class IValueSerializer.java package.html SuccessorUtil.java RecordCompressor.java EmptyEntryIterator.java IndexSegmentMerger.java ImmutableKeyBuffer.java Errors.java IndexSegment.java BytesUtil.java IndexSegmentMetadata.java UserDefinedFunction.java ILinearList.java AbstractKeyBuffer.java IndexSegmentBuilder.java IAddressSerializer.java AbstractNode.java DataInputBuffer.java KeyBuilder.java BytesUtil.c PO.java INodeData.java INodeIterator.java BatchContains.java NodeSerializer.java BatchLookup.java IBatchOp.java Leaf.java IFusedView.java IEntryIterator.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. 
--- IndexSegmentFileStore.java DELETED --- --- ConditionalInsertNoValue.java DELETED --- --- MutableKeyBuffer.java DELETED --- --- IFusedView.java DELETED --- --- BatchInsert.java DELETED --- --- SuccessorUtil.java DELETED --- --- IBatchBTree.java DELETED --- --- AbstractNode.java DELETED --- --- DirtyChildIterator.java DELETED --- --- ChildIterator.java DELETED --- --- IndexSegment.java DELETED --- --- ILinearList.java DELETED --- --- Counters.java DELETED --- --- ReadOnlyIndex.java DELETED --- --- BytesUtil.java DELETED --- --- ReadOnlyFusedView.java DELETED --- --- IAddressSerializer.java DELETED --- --- PO.java DELETED --- --- RecordCompressor.java DELETED --- --- IEvictionListener.java DELETED --- --- IKeyBuffer.java DELETED --- --- IndexSegmentMetadata.java DELETED --- --- IndexSegmentExtensionMetadata.java DELETED --- --- IDirty.java DELETED --- --- EmptyEntryIterator.java DELETED --- --- DataOutputBuffer.java DELETED --- --- IAbstractNodeData.java DELETED --- --- IAbstractNode.java DELETED --- --- ImmutableKeyBuffer.java DELETED --- --- IBatchOp.java DELETED --- --- ILeafData.java DELETED --- --- DataInputBuffer.java DELETED --- --- BytesUtil$UnsignedByteArrayComparator.class DELETED --- --- NoSuccessorException.java DELETED --- --- AbstractKeyBuffer.java DELETED --- --- IReadOnlyBatchOp.java DELETED --- --- IndexSegmentBuilder.java DELETED --- --- EntryIterator.java DELETED --- --- IEntryIterator.java DELETED --- --- Leaf.java DELETED --- --- INodeIterator.java DELETED --- --- BatchLookup.java DELETED --- --- INodeFactory.java DELETED --- --- package.html DELETED --- --- Tuple.java DELETED --- --- compile.sh DELETED --- --- Errors.java DELETED --- --- BTree.java DELETED --- --- IKeySerializer.java DELETED --- --- AddressSerializer.java DELETED --- --- PackedAddressSerializer.java DELETED --- --- KeyBufferSerializer.java DELETED --- --- IndexSegmentPlan.java DELETED --- --- UserDefinedFunction.java DELETED --- --- Node.java DELETED --- --- IIndex.java DELETED --- --- ISimpleBTree.java DELETED --- --- ConditionalInsert.java DELETED --- --- DefaultEvictionListener.java DELETED --- --- IValueSerializer.java DELETED --- --- IRangeQuery.java DELETED --- --- BatchContains.java DELETED --- --- ByteArrayValueSerializer.java DELETED --- --- IIdentityAccess.java DELETED --- --- BTreeMetadata.java DELETED --- --- INodeData.java DELETED --- --- BatchRemove.java DELETED --- --- KeyBuilder.java DELETED --- --- AbstractBTree.java DELETED --- --- BytesUtil.class DELETED --- --- BytesUtil.c DELETED --- --- NodeSerializer.java DELETED --- --- IndexSegmentMerger.java DELETED --- |
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/btree In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9284/src/java/com/bigdata/btree Added Files: BytesUtil$UnsignedByteArrayComparator.class ILinearList.java PO.java DataOutputBuffer.java BatchInsert.java IndexSegmentExtensionMetadata.java Tuple.java IBatchOp.java SuccessorUtil.java NoSuccessorException.java BytesUtil.c BTreeMetadata.java AbstractBTree.java ByteArrayValueSerializer.java Counters.java IndexSegment.java ReadOnlyIndex.java BTree.java IKeySerializer.java IEntryIterator.java BatchRemove.java IEvictionListener.java AbstractKeyBuffer.java ConditionalInsert.java Leaf.java compile.sh Node.java BatchContains.java IndexSegmentMetadata.java MutableKeyBuffer.java DirtyChildIterator.java BytesUtil.java IValueSerializer.java IKeyBuffer.java Errors.java ImmutableKeyBuffer.java IndexSegmentMerger.java IndexSegmentPlan.java DataInputBuffer.java IAbstractNode.java ReadOnlyFusedView.java IBatchBTree.java PackedAddressSerializer.java ConditionalInsertNoValue.java IDirty.java IRangeQuery.java IAddressSerializer.java IndexSegmentBuilder.java INodeFactory.java INodeIterator.java AddressSerializer.java INodeData.java UserDefinedFunction.java IAbstractNodeData.java KeyBufferSerializer.java AbstractNode.java package.html EntryIterator.java IFusedView.java BytesUtil.class IndexSegmentFileStore.java BatchLookup.java IIdentityAccess.java IIndex.java ISimpleBTree.java KeyBuilder.java IReadOnlyBatchOp.java NodeSerializer.java ILeafData.java EmptyEntryIterator.java RecordCompressor.java ChildIterator.java DefaultEvictionListener.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. --- NEW FILE: IndexSegmentFileStore.java --- package com.bigdata.btree; import it.unimi.dsi.mg4j.util.BloomFilter; import java.io.File; import java.io.IOException; import java.io.RandomAccessFile; import java.lang.reflect.Constructor; import java.nio.ByteBuffer; import org.apache.log4j.Logger; import com.bigdata.io.SerializerUtil; import com.bigdata.rawstore.Addr; import com.bigdata.rawstore.IRawStore; /** * A read-only store backed by a file. The section of the file containing the * index nodes may be fully buffered. * <p> * Note: An LRU disk cache is a poor choice for the leaves. Since the btree * already maintains a cache of the recently touched leaf objects, a recent read * against the disk is the best indication that we have that we will not want to * read that region again soon. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public class IndexSegmentFileStore implements IRawStore { /** * Logger. */ protected static final Logger log = Logger .getLogger(IndexSegmentFileStore.class); /** * A buffer containing the disk image of the nodes in the index segment. * While some nodes will be held in memory by the hard reference queue * the use of this buffer means that reading a node that has fallen off * of the queue does not require any IOs. */ private ByteBuffer buf_nodes; /** * The file containing the index segment. */ protected final File file; /** * The random access file used to read the index segment. */ private RandomAccessFile raf; /** * A read-only view of the metadata record for the index segment. */ protected IndexSegmentMetadata metadata; /** * A read-only view of the extension metadata record for the index segment. 
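* (The extension metadata record names the {@link IndexSegment} class, or derived class, that
* will be instantiated by {@link #load()} - see {@link IndexSegmentExtensionMetadata}.)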
*/ protected IndexSegmentExtensionMetadata extensionMetadata; /** * True iff the store is open. */ private boolean open = false; /** * Open the read-only store. * * @param file * * @throws IOException * * @todo make it optional to fully buffer the index nodes? * @todo make it optional to fully buffer the leaves as well as the nodes? * * @see #load() */ public IndexSegmentFileStore(File file) { if (file == null) throw new IllegalArgumentException(); this.file = file; reopen(); } /** * Re-open a closed store. This operation should succeed if the backing file * is still accessible. * * @exception IllegalStateException * if the store is not closed. * * @see #close() */ public void reopen() { if (open) throw new IllegalStateException("Already open."); if (!file.exists()) { throw new RuntimeException("File does not exist: " + file.getAbsoluteFile()); } try { // open the file. this.raf = new RandomAccessFile(file, "r"); // read the metadata record from the file. this.metadata = new IndexSegmentMetadata(raf); IndexSegment.log.info(metadata.toString()); /* * Read in the extension metadata record. */ this.extensionMetadata = readExtensionMetadata(); /* * Read the index nodes from the file into a buffer. If there are no * index nodes then we skip this step. Note that we always read in * the root, so if the index is just a root leaf then the root will * be a deserialized object and the file will not be buffered in * memory. */ this.buf_nodes = (metadata.nnodes > 0 ? bufferIndexNodes(raf) : null); /* * Mark as open so that we can use read(long addr) to read other * data (the root node/leaf). */ this.open = true; } catch (IOException ex) { throw new RuntimeException(ex); } } /** * Load the {@link IndexSegment} or derived class from the store. The * {@link IndexSegment} or derived class MUST provide a public constructor * with the following signature: <code> * * <i>className</i>(IndexSegmentFileStore store) * * </code> * * @param store * The store. * * @return The {@link IndexSegment} or derived class loaded from that store. * * @see IndexSegmentExtensionMetadata, which provides a metadata extension * protocol for the {@link IndexSegment}. */ public IndexSegment load() { try { Class cl = Class.forName(extensionMetadata.getClassName()); Constructor ctor = cl .getConstructor(new Class[] { IndexSegmentFileStore.class }); IndexSegment seg = (IndexSegment) ctor .newInstance(new Object[] { this }); return seg; } catch(Exception ex) { throw new RuntimeException(ex); } } public boolean isOpen() { return open; } public boolean isStable() { return true; } public boolean isFullyBuffered() { return false; } public File getFile() { return file; } /** * Closes the file and releases the internal buffers and metadata records. * This operation may be reversed by {@link #reopen()} as long as the * backing file remains available. */ public void close() { if (!open) throw new IllegalStateException(); try { raf.close(); raf = null; buf_nodes = null; metadata = null; extensionMetadata = null; open = false; } catch (IOException ex) { throw new RuntimeException(ex); } } public void closeAndDelete() { close(); if(!file.delete()) { System.err.println("WARN: Could not delete: "+file.getAbsolutePath()); } } public long write(ByteBuffer data) { throw new UnsupportedOperationException(); } public void force(boolean metadata) { throw new UnsupportedOperationException(); } public long size() { return metadata.length; } /** * Read from the index segment. 
If the request is in the node region and * the nodes have been buffered then this uses a slice on the node * buffer. Otherwise this reads through to the backing file. */ public ByteBuffer read(long addr) { if (!open) throw new IllegalStateException(); final int offset = Addr.getOffset(addr); final int length = Addr.getByteCount(addr); final int offsetNodes = Addr.getOffset(metadata.addrNodes); ByteBuffer dst; if (offset >= offsetNodes && buf_nodes != null) { /* * the data are buffered. create a slice onto the read-only * buffer that reveals only those bytes that contain the desired * node. the position() of the slice will be zero(0) and the * limit() will be the #of bytes in the compressed record. */ // correct the offset so that it is relative to the buffer. int off = offset - offsetNodes; // System.err.println("offset="+offset+", length="+length); // set the limit on the buffer to the end of the record. buf_nodes.limit(off + length); // set the position on the buffer to the start of the record. buf_nodes.position(off); // create a slice of that view. dst = buf_nodes.slice(); } else { // Allocate buffer. dst = ByteBuffer.allocate(length); /* * the data need to be read from the file. */ dst.limit(length); dst.position(0); try { // read into [dst] - does not modify the channel's position(). raf.getChannel().read(dst, offset); } catch (IOException ex) { throw new RuntimeException(ex); } dst.flip(); // Flip buffer for reading. } return dst; } /** * Reads the index nodes into a buffer. * * @return A read-only view of a buffer containing the index nodes. */ protected ByteBuffer bufferIndexNodes(RandomAccessFile raf) throws IOException { if(metadata.addrNodes == 0L) { throw new IllegalStateException("No nodes."); } final int offset = Addr.getOffset(metadata.addrNodes); final int nbytes = Addr.getByteCount(metadata.addrLeaves); /* * Note: The direct buffer imposes a higher burden on the JVM and all * operations after we read the data from the disk should be faster with * a heap buffer, so my expectation is that a heap buffer is the correct * choice here. */ // ByteBuffer buf = ByteBuffer.allocateDirect(nbytes); ByteBuffer buf = ByteBuffer.allocate(nbytes); raf.getChannel().read(buf, offset); return buf.asReadOnlyBuffer(); } /** * Reads the bloom filter directly from the file. * * @return The bloom filter -or- <code>null</code> if the bloom filter was * not constructed when the {@link IndexSegment} was built. */ protected BloomFilter readBloomFilter() throws IOException { final long addr = metadata.addrBloom; if(addr == 0L) { return null; } log.info("reading bloom filter: "+Addr.toString(addr)); final int off = Addr.getOffset(addr); final int len = Addr.getByteCount(addr); ByteBuffer buf = ByteBuffer.allocate(len); buf.limit(len); buf.position(0); try { // read into [dst] - does not modify the channel's position(). final int nread = raf.getChannel().read(buf, off); assert nread == len; buf.flip(); // Flip buffer for reading. 
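// [buf] now holds exactly [len] bytes of the serialized bloom filter record with
// position zero after the flip; it is deserialized via SerializerUtil below.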
} catch (IOException ex) { throw new RuntimeException(ex); } assert buf.position() == 0; assert buf.limit() == len; // ByteBufferInputStream bais = new ByteBufferInputStream(buf); // //// ByteArrayInputStream bais = new ByteArrayInputStream(buf.array()); // // ObjectInputStream ois = new ObjectInputStream(bais); // // try { // // BloomFilter bloomFilter = (BloomFilter) ois.readObject(); // // log.info("Read bloom filter: minKeys=" + bloomFilter.size() // + ", entryCount=" + metadata.nentries + ", bytesOnDisk=" // + len + ", errorRate=" + metadata.errorRate); // // return bloomFilter; // // } // // catch(Exception ex) { // // IOException ex2 = new IOException("Could not read bloom filter: "+ex); // // ex2.initCause(ex); // // throw ex2; // // } BloomFilter bloomFilter = (BloomFilter) SerializerUtil.deserialize(buf); log.info("Read bloom filter: minKeys=" + bloomFilter.size() + ", entryCount=" + metadata.nentries + ", bytesOnDisk=" + len + ", errorRate=" + metadata.errorRate); return bloomFilter; } /** * Reads the {@link IndexSegmentExtensionMetadata} record directly from the * file. */ protected IndexSegmentExtensionMetadata readExtensionMetadata() throws IOException { final long addr = metadata.addrExtensionMetadata; assert addr != 0L; log.info("reading extension metadata record: "+Addr.toString(addr)); final int off = Addr.getOffset(addr); final int len = Addr.getByteCount(addr); ByteBuffer buf = ByteBuffer.allocate(len); buf.limit(len); buf.position(0); try { // read into [dst] - does not modify the channel's position(). final int nread = raf.getChannel().read(buf, off); assert nread == len; buf.flip(); // Flip buffer for reading. } catch (IOException ex) { throw new RuntimeException(ex); } assert buf.position() == 0; assert buf.limit() == len; IndexSegmentExtensionMetadata extensionMetadata = (IndexSegmentExtensionMetadata) SerializerUtil .deserialize(buf); log.info("Read extension metadata: " + extensionMetadata); return extensionMetadata; } } --- NEW FILE: ConditionalInsertNoValue.java --- package com.bigdata.btree; /** * User defined function supporting the conditional insert of a value iff no * entry is found under a search key. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ final public class ConditionalInsertNoValue implements UserDefinedFunction { private static final long serialVersionUID = 2942811843264522254L; public ConditionalInsertNoValue(Object value) { } /** * Do not insert if an entry is found. */ public Object found(byte[] key, Object oldval) { return oldval; } /** * Insert if not found. */ public Object notFound(byte[] key) { return INSERT_NULL; } public Object returnValue(byte[] key, Object oldval) { return oldval; } } --- NEW FILE: MutableKeyBuffer.java --- package com.bigdata.btree; /** * A mutable implementation of {@link IKeyBuffer}. * * @todo 27% of the search cost is dealing with the prefix. * @todo track prefix length for mutable keys (update when first/last key are * updated). at present the node/leaf logic directly manipulates the * keys. that will have to be changed to track the prefix length. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public class MutableKeyBuffer extends AbstractKeyBuffer { /** * The #of defined keys. */ int nkeys; /** * An array containing the keys. The size of the array is the maximum * capacity of the key buffer. */ final byte[][] keys; /** * Allocate a mutable key buffer capable of storing <i>capacity</i> keys. * * @param capacity * The capacity of the key buffer. 
*/ public MutableKeyBuffer(int capacity) { nkeys = 0; keys = new byte[capacity][]; } /** * Constructor wraps an existing byte[][]. * * @param nkeys * The #of defined keys in the array. * @param keys * The array of keys. */ public MutableKeyBuffer(int nkeys, byte[][] keys ) { assert nkeys >= 0; // allow deficient root. assert keys != null; assert keys.length >= nkeys; this.nkeys = nkeys; this.keys = keys; } /** * Creates a new instance using a new array of keys but sharing the key * references with the provided {@link MutableKeyBuffer}. * * @param src * An existing instance. */ public MutableKeyBuffer(MutableKeyBuffer src) { assert src != null; // assert capacity > src.nkeys; this.nkeys = src.nkeys; this.keys = new byte[src.keys.length][]; for( int i=0; i<nkeys; i++ ) { // Note: copies the reference. this.keys[i] = src.keys[i]; } } /** * Builds a mutable key buffer from an immutable key buffer. * * @param src * The immutable key buffer. */ public MutableKeyBuffer(ImmutableKeyBuffer src) { assert nkeys >= 0; // allow deficient root. assert src != null; nkeys = src.nkeys; keys = src.toKeyArray(); } /** * Returns a reference to the key at that index. */ final public byte[] getKey(int index) { // assert index >= 0 && index < nkeys; return keys[index]; } final public int getKeyCount() { return nkeys; } /** * The maximum #of keys that may be held in the buffer (its capacity). */ final public int getMaxKeys() { return keys.length; } /** * True iff the key buffer can not contain another key. */ final public boolean isFull() { return nkeys == keys.length; } /* * Mutation api. The contents of individual keys are never modified. Some of * the driver logic in Leaf and Node uses loops where nkeys is being * decremented while keys are being processed from, e.g., a split point to * the last key position. This has the effect that an assert such as * * index < nkeys * * will fail. Those looping constructs are not wrong as originally written * but a move to encapsulate the key buffer puts their treatment of nkeys at * odds with index testing since nkeys is temporarily inconsistent with the * keys[]. * * @todo maintain the prefix length. A trivial example of shortening the * shared prefix occurs when keys are inserted into the root leaf. Consider * that we first insert <code>4,5,6</code>. Since there is only one key * in the root leaf, the length of the prefix is the same as the length of * the key. If we then insert <code>4,5,6,1</code> the prefix does not * change. However, if we then insert <code>4,5,2</code> the prefix is now * shortened to <code>4,5</code>. If we insert <code>5</code>, then * the prefix is shortened to an empty byte[]. Prefix shortening can also * occur in trees with more than one level whenever a key is inserted into a * leaf and becomes either the first or last key in that leaf. Likewise, it * is possible for the prefix length to either grow when a leaf overflows * and keys are redistributed. */ /** * Set the key at the specified index. * * @param index * The index in [0:nkeys-1]. * @param key * The key (non-null). * * @todo Who uses this? Track prefixLength? */ final public void setKey(int index, byte[] key) { assert index >= 0 && index < nkeys; keys[index] = key; } /** * Set the key at the specified index to <code>null</code>. This is used * to clear elements of {@link #keys} that are no longer defined. The caller * is responsible for updating {@link #nkeys} when using this method. 
* * @param index * The key index in [0:maxKeys-1]; */ final protected void zeroKey(int index) { // assert index >= 0 && index < nkeys; keys[index] = null; } /** * Append a key to the end of the buffer, incrementing the #of keys in the * buffer. * * @param key * A key. * * @return The #of keys in the buffer. * * @todo update prefixLength, lazily compute prefix. */ final public int add(byte[] key) { assert nkeys < keys.length; assert key != null; keys[nkeys++] = key; return nkeys; } /** * Insert a key into the buffer at the specified index, incrementing the #of * keys in the buffer by one and moving down all keys from that index on * down by one (towards the end of the array). * * @param index * The index in [0:nkeys] (you are allowed to append using this * method). * @param key * The key. * * @return The #of keys in the buffer. * * @todo if index==0 || index==nkeys-1 then update prefixLength, lazily * compute prefix. */ final public int insert(int index, byte[] key) { assert index >= 0 && index <= nkeys; if( index == nkeys ) { // append return add(key); } /* index = 2; * nkeys = 6; * * [ 0 1 2 3 4 5 ] * ^ index * * count = keys - index = 4; */ final int count = nkeys - index; assert count >= 1; System.arraycopy(keys, index, keys, index+1, 1); keys[index] = key; return ++nkeys; } /** * Remove a key in the buffer at the specified index, decrementing the #of * keys in the buffer by one and moving up all keys from that index on down * by one (towards the start of the array). * * @param index * The index in [0:nkeys-1]. * @param key * The key. * * @return The #of keys in the buffer. * * @todo if index==0 || index==nkeys-1 then update prefixLength, lazily * compute prefix (requires that the application never directly * modifies keys). */ final public int remove(int index) { assert index >= 0 && index < nkeys; /* * Copy down to cover up the hole. */ final int length = nkeys - index - 1; if(length > 0) { System.arraycopy(keys, index + 1, keys, index, length); } keys[--nkeys] = null; return nkeys; } public String toString() { StringBuilder sb = new StringBuilder(); sb.append("nkeys=" + nkeys); sb.append(", maxKeys=" + keys.length); sb.append(", prefix=" + BytesUtil.toString(getPrefix())); sb.append(", ["); for (int i = 0; i < keys.length; i++) { if (i > 0) sb.append(", "); byte[] key = keys[i]; if (key == null) { sb.append("null"); } else { sb.append(BytesUtil.toString(key)); } } sb.append("]"); return sb.toString(); } public MutableKeyBuffer toMutableKeyBuffer() { // if( capacity < nkeys + 1 ) throw new IllegalArgumentException(); // return new MutableKeyBuffer(capacity,this); return new MutableKeyBuffer(this); } final public int search(final byte[] searchKey) { if (searchKey == null) throw new IllegalArgumentException("searchKey is null"); if( nkeys == 0 ) { /* * If there are no keys in the buffer, then any key would be * inserted at the first buffer position. */ return -1; } /* * The length of the prefix shared by all keys in the buffer. */ final int prefixLength = getPrefixLength(); /* * Attempt to match the shared prefix. If we can not then return the * insert position, which is either before the first key or after the * last key in the buffer. */ final int insertPosition = _prefixMatchLength(prefixLength, searchKey); if( insertPosition < 0 ) { return insertPosition; } /* * Search keys, but only bytes from prefixLength on in each key. 
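* (for small nkeys a simple linear scan tends to outperform binary search, hence the cutover at
* 16 keys below; the commented-out PerformanceTest in the key buffer search tests was written to
* identify that tradeoff point.)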
*/ if (nkeys < 16) { return _linearSearch(prefixLength, searchKey); } else { return _binarySearch(prefixLength, searchKey); } } final protected int _prefixMatchLength(final int prefixLength, final byte[] searchKey) { final int searchKeyLen = searchKey.length; /* * Do not compare more bytes than remain in either the search key or the * prefix, e.g., compareLen := min(searchKeyLen, prefixLen). */ final int compareLen = (searchKeyLen <= prefixLength) ? searchKeyLen : prefixLength; int ret = BytesUtil.compareBytesWithLenAndOffset(// 0, compareLen, searchKey,// 0, compareLen, keys[0]// ); if (ret < 0) { /* insert before the first key. */ return -1; } else if (ret > 0) { /* insert after the last key. */ return -(nkeys) - 1; } else { /* * For the case when the search key is _shorter_ than the prefix, * matching on all bytes of the search key means that the search key * will be ordered before all keys in the buffer. */ if (searchKeyLen < prefixLength) return -1; /* * entire prefix matched, continue to search the remainder for each * key. */ return 0; } } final protected int _linearSearch(final int searchKeyOffset, final byte[] searchKey) { // #of bytes to search in the search key after the prefix match. final int searchKeyLen = searchKey.length - searchKeyOffset; // searching zero or more bytes in the search key after the prefix match. assert searchKeyLen >= 0; for (int i = 0; i < nkeys; i++) { final byte[] key = keys[i]; final int keyLen = key.length - searchKeyOffset; assert keyLen >= 0; // skip the first offset bytes, then compare no more bytes than // remain in the key. final int ret = BytesUtil.compareBytesWithLenAndOffset(// searchKeyOffset, keyLen, key,// searchKeyOffset, searchKeyLen, searchKey// ); if (ret == 0) return i; if (ret > 0) return -(i + 1); } return -(nkeys + 1); } final protected int _binarySearch(final int searchKeyOffset, final byte[] searchKey) { final int searchKeyLen = searchKey.length - searchKeyOffset; assert searchKeyLen >= 0; int low = 0; int high = nkeys - 1; while (low <= high) { final int mid = (low + high) >> 1; final byte[] key = keys[mid]; final int keyLen = key.length - searchKeyOffset; assert keyLen >= 0; // skip the first offset bytes, then compare no more bytes than // remain in the key. final int ret = BytesUtil.compareBytesWithLenAndOffset(// searchKeyOffset, keyLen, key,// searchKeyOffset, searchKeyLen, searchKey// ); if (ret < 0) { low = mid + 1; } else if (ret > 0) { high = mid - 1; } else { // Found: return offset. return mid; } } // Not found: return insertion point. return -(low + 1); } /** * Verifies that the keys are in sort order and that undefined keys are * [null]. */ protected final void assertKeysMonotonic() { for (int i = 1; i < nkeys; i++) { if (BytesUtil.compareBytes(keys[i], keys[ i - 1] ) <= 0) { throw new AssertionError("Keys out of order at index=" + i + ", keys=" + this.toString()); } } for( int i=nkeys; i<keys.length; i++ ) { if( keys[i] != null ) { throw new AssertionError("Expecting null at index="+i); } } } /** * Computes the length of the prefix by computed by counting the #of leading * bytes that match for the first and last key in the buffer. */ public int getPrefixLength() { if( nkeys == 0 ) return 0; if( nkeys == 1 ) return keys[0].length; return BytesUtil.getPrefixLength(keys[0], keys[nkeys - 1]); } /** * Computes the #of leading bytes shared by all keys and returns a new * byte[] containing those bytes. 
*/ public byte[] getPrefix() { if( nkeys == 0 ) return EMPTY_PREFIX; if( nkeys == 1) return keys[0]; return BytesUtil.getPrefix(keys[0], keys[nkeys-1]); } private static final transient byte[] EMPTY_PREFIX = new byte[]{}; } --- NEW FILE: IFusedView.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Mar 6, 2007 */ package com.bigdata.btree; /** * A marker interface indicating fused view providing read-only operations on * multiple B+-Trees mapping variable length unsigned byte[] keys to arbitrary * values. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public interface IFusedView extends IIndex { /** * Return the ordered array of sources from which the fused view is reading. */ public AbstractBTree[] getSources(); } --- NEW FILE: BatchInsert.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Feb 12, 2007 */ package com.bigdata.btree; /** * Data for a batch insert operation. 
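* <p>
* A minimal usage sketch (the <code>btree</code> variable here is assumed to be some
* {@link IBatchBTree}):
* </p>
* <pre>
* // keys must be presented in sorted (unsigned byte[]) order for maximum efficiency.
* byte[][] keys = new byte[][] { new byte[] { 1, 2 }, new byte[] { 1, 3 } };
* Object[] vals = new Object[] { "a", "b" };
* BatchInsert op = new BatchInsert(keys.length, keys, vals);
* btree.insert(op);
* // on return, vals[i] is the old value stored under keys[i], or null if there was no entry.
* </pre>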
* * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public class BatchInsert implements IBatchOp { /** * The #of tuples to be processed. */ public final int ntuples; /** * The keys for each tuple. */ public final byte[][] keys; /** * The value corresponding to each key. */ public final Object[] values; /** * The index of the tuple that is currently being processed. */ public int tupleIndex = 0; /** * Create a batch insert operation. * * Batch insert operation of N tuples presented in sorted order. This * operation can be very efficient if the tuples are presented sorted by key * order. * * @param ntuples * The #of tuples that are being inserted(in). * @param keys * A series of keys paired to values (in). Each key is an * variable length unsigned byte[]. The keys must be presented in * sorted order in order to obtain maximum efficiency for the * batch operation.<br> * The individual byte[] keys provided to this method MUST be * immutable - if the content of a given byte[] in <i>keys</i> * is changed after the method is invoked then the change MAY * have a side-effect on the keys stored in leaves of the tree. * While this constraint applies to the individual byte[] keys, * the <i>keys</i> byte[][] itself may be reused from invocation * to invocation without side-effect. * @param values * Values (one element per key) (in/out). Null elements are * allowed. On output, each element is either null (if there was * no entry for that key) or the old value stored under that key * (which may be null). */ public BatchInsert(int ntuples, byte[][] keys, Object[] values) { if (ntuples <= 0) throw new IllegalArgumentException(Errors.ERR_NTUPLES_NON_POSITIVE); if (keys == null) throw new IllegalArgumentException(Errors.ERR_KEYS_NULL); if( keys.length < ntuples ) throw new IllegalArgumentException(Errors.ERR_NOT_ENOUGH_KEYS); if (values == null) throw new IllegalArgumentException(Errors.ERR_VALS_NULL); if( values.length < ntuples ) throw new IllegalArgumentException(Errors.ERR_NOT_ENOUGH_VALS); this.ntuples = ntuples; this.keys = keys; this.values = values; } /** * Applies the operator using {@link ISimpleBTree#insert(Object, Object)} * * @param btree */ public void apply(ISimpleBTree btree) { while (tupleIndex < ntuples) { values[tupleIndex] = btree.insert(keys[tupleIndex], values[tupleIndex]); tupleIndex ++; } } } --- NEW FILE: SuccessorUtil.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. 
This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Feb 3, 2007 */ package com.bigdata.btree; /** * Utility methods for computing the successor of a value for various data * types. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ * * @see BytesUtil#successor(byte[]), which computes the successor of a variable * length unsigned byte[]. */ public class SuccessorUtil { /* * useful single precision values. */ /** * Positive zero (+0f). * <p> * Note: +0f and -0f will compare as _equal_ in the language. This means * that you can not write tests that directly distinguish positive and * negative zero using >, <, or ==. */ final public static float FPOS_ZERO = Float.intBitsToFloat(0x00000000); /** * Negative zero (-0f). * <p> * Note: +0f and -0f will compare as _equal_ in the language. This means * that you can not write tests that directly distinguish positive and * negative zero using >, <, or ==. */ final public static float FNEG_ZERO = Float.intBitsToFloat(0x80000000); /** * Smallest non-zero positive value. */ final public static float FPOS_MIN = Float.intBitsToFloat(0x00000001); /** * Smallest non-zero negative value. */ final public static float FNEG_MIN = Float.intBitsToFloat(0x80000001); /** * Positive one (1f). */ final public static float FPOS_ONE = 1f; /** * Negative one (-1f). */ final public static float FNEG_ONE = -1f; /** * Largest positive float. */ final public static float FPOS_MAX = Float.MAX_VALUE; /** * Largest negative float. */ final public static float FNEG_MAX = Float.intBitsToFloat(0xFF7FFFFF); /* * useful double precision values. */ /** * Positive zero (+0d). * <p> * Note: +0d and -0d will compare as _equal_ in the language. This means * that you can not write tests that directly distinguish positive and * negative zero using >, <, or ==. */ final public static double DPOS_ZERO = Double.longBitsToDouble(0x0000000000000000L); /** * Negative zero (-0d). * <p> * Note: +0d and -0d will compare as _equal_ in the language. This means * that you can not write tests that directly distinguish positive and * negative zero using >, <, or ==. */ final public static double DNEG_ZERO = Double.longBitsToDouble(0x8000000000000000L); /** * Smallest non-zero positive value. */ final public static double DPOS_MIN = Double.longBitsToDouble(0x0000000000000001L); /** * Smallest non-zero negative value. */ final public static double DNEG_MIN = Double.longBitsToDouble(0x8000000000000001L); /** * Positive one (1d). */ final public static double DPOS_ONE = 1d; /** * Negative one (-1d). */ final public static double DNEG_ONE = -1d; /** * Largest positive double. */ final public static double DPOS_MAX = Double.MAX_VALUE; /** * Largest negative double. */ final public static double DNEG_MAX = Double.longBitsToDouble(0xFFEFFFFFFFFFFFFFL); /* * successor methods. */ /** * Computes the successor of a <code>byte</code> value. * * @param n * A value * * @return The successor of that value. * * @exception NoSuccessorException * if there is no successor for that value. */ public static byte successor( byte n ) throws NoSuccessorException { if (Byte.MAX_VALUE == n) { throw new NoSuccessorException(); } else { return (byte) (n + 1); } } /** * Computes the successor of a <code>char</code> value. * * @param n * A value * * @return The successor of that value. * * @exception NoSuccessorException * if there is no successor for that value. 
*/ public static char successor( char n ) throws NoSuccessorException { if (Character.MAX_VALUE == n) { throw new NoSuccessorException(); } else { return (char) (n + 1); } } /** * Computes the successor of a <code>short</code> value. * * @param n * A value * * @return The successor of that value. * * @exception NoSuccessorException * if there is no successor for that value. */ public static short successor( short n ) throws NoSuccessorException { if (Short.MAX_VALUE == n) { throw new NoSuccessorException(); } else { return (short) (n + 1); } } /** * Computes the successor of an <code>int</code> value. * * @param n * A value * * @return The successor of that value. * * @exception NoSuccessorException * if there is no successor for that value. */ public static int successor( int n ) throws NoSuccessorException { if (Integer.MAX_VALUE == n) { throw new NoSuccessorException(); } else { return n + 1; } } /** * Computes the successor of a <code>long</code> value. * * @param n * A value * * @return The successor of that value. * * @exception NoSuccessorException * if there is no successor for that value. */ public static long successor( long n ) throws NoSuccessorException { if (Long.MAX_VALUE == n) { throw new NoSuccessorException(); } else { return n + 1L; } } /** * <p> * Computes the successor of a <code>float</code> value. * </p> * <p> * The IEEE floating point standard provides a means for computing the next * larger or smaller floating point value using a bit manipulation trick. * See <a * href="http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm"> * Comparing floating point numbers </a> by Bruce Dawson. The Java * {@link Float} and {@link Double} clases provide the static methods * required to convert a float or double into its IEEE 754 floating point * bit layout, which can be treated as an int (for floats) or a long (for * doubles). By testing for the sign, you can just add (or subtract) one (1) * to get the bit pattern of the successor (see the above referenced * article). Special exceptions must be made for NaNs, negative infinity and * positive infinity. * </p> * * @param f * The float value. * * @return The next value in the value space for <code>float</code>. * * @exception NoSuccessorException * if there is no next value in the value space. */ static public float successor( float f ) throws NoSuccessorException { if (f == Float.MAX_VALUE) { return Float.POSITIVE_INFINITY; } if (Float.isNaN(f)) { throw new NoSuccessorException("NaN"); } if (Float.isInfinite(f)) { if (f > 0) { throw new NoSuccessorException("Positive Infinity"); } else { /* no successor for negative infinity (could be the largest * negative value). */ throw new NoSuccessorException("Negative Infinity"); } } int bits = Float.floatToIntBits(f); if (bits == 0x80000000) { /* * the successor of -0.0f is +0.0f * * @todo Java defines the successor of floating point zeros as the * first non-zero value so maybe we should change this. */ return +0.0f; } if (f >= +0.0f) { bits += 1; } else { bits -= 1; } float nxt = Float.intBitsToFloat(bits); return nxt; } /** * <p> * Computes the successor of a <code>double</code> value. * </p> * <p> * The IEEE floating point standard provides a means for computing the next * larger or smaller floating point value using a bit manipulation trick. * See <a * href="http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm"> * Comparing floating point numbers </a> by Bruce Dawson. 
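* (For example, the bit pattern of <code>1d</code> is <code>0x3FF0000000000000L</code>; adding
* one to that pattern yields the next representable double, which is 1 + 2^-52.)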
The Java * {@link Float} and {@link Double} clases provide the static methods * required to convert a float or double into its IEEE 754 floating point * bit layout, which can be treated as an int (for floats) or a long (for * doubles). By testing for the sign, you can just add (or subtract) one (1) * to get the bit pattern of the successor (see the above referenced * article). Special exceptions must be made for NaNs, negative infinity and * positive infinity. * </p> * * @param d The double value. * * @return The next value in the value space for <code>double</code>. * * @exception NoSuccessorException * if there is no next value in the value space. */ public static double successor( double d ) throws NoSuccessorException { if (d == Double.MAX_VALUE) { return Double.POSITIVE_INFINITY; } if (Double.isNaN(d)) { throw new NoSuccessorException("Nan"); } if (Double.isInfinite(d)) { if (d > 0) { throw new NoSuccessorException("Positive Infinity"); } else { // The successor of negative infinity. return Double.MIN_VALUE; } } long bits = Double.doubleToLongBits(d); if (bits == 0x8000000000000000L) { /* the successor of -0.0d is +0.0d * * @todo Java defines the successor of floating point zeros as the * first non-zero value so maybe we should change this. */ return +0.0d; } // if (f >= +0.0f) { if (d >= +0.0) { bits += 1; } else { bits -= 1; } double nxt = Double.longBitsToDouble(bits); return nxt; } /** * The successor of a string value is formed by appending a <code>nul</code>. * The successor of a <code>null</code> string reference is an empty * string. The successor of a string value is defined unless the string is * too long. * * @param s The string reference or <code>null</code>. * * @return The successor and never <code>null</code> */ public static String successor(String s) { if (s == null) return "\0"; return s + "\0"; } } --- NEW FILE: IBatchBTree.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Feb 1, 2007 */ package com.bigdata.btree; /** * <p> * Interface for batch operations a B+-Tree mapping variable length unsigned * byte[] keys to arbitrary values. Batch operations can be very efficient if * the keys are presented in sorted order. 
* </p> * <p> * All mutation operations on a {@link BTree} are executed in a single threaded * context and are therefore atomic. A batch api operation that does NOT span * more than one index partition is therefore atomic. However, if an operation * spans multiple partitions of an index then NO GUARENTEE is made that the * operation is atomic over the set of index partitions. * </p> * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ * * @see KeyBuilder, which may be used to encode one or more values into a * variable length unsigned byte[] key. * * @todo add batch api for rangeCount and rangeQuery. * * @todo support batch api for indexOf(), keyAt(), valueAt()? * * @todo add extensible operation defined by an vector * {@link UserDefinedFunction}. Use this to move application logic to the * indices as an alternative to scalar {@link UserDefinedFunction}s. For * example, the logic that tests an index for a key, increments a counter * if the key is not found, and inserts the counter under the key. */ public interface IBatchBTree { /** * Apply a batch insert operation. */ public void insert(BatchInsert op); /** * Apply a batch lookup operation. */ public void lookup(BatchLookup op); /** * Apply a batch existence test operation. */ public void contains(BatchContains op); /** * Apply a batch remove operation. */ public void remove(BatchRemove op); } --- NEW FILE: AbstractNode.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations [...1232 lines suppressed...] * * @return A string suitable for indent at that height. */ protected static String indent(int height) { if( height == -1 ) { // The height is not defined. return ""; } return ws.substring(0, height * 4); } private static final transient String ws = " "; } --- NEW FILE: DirtyChildIterator.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. 
Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Nov 15, 2006 */ package com.bigdata.btree; import java.lang.ref.WeakReference; import java.util.NoSuchElementException; /** * Visits the direct dirty children of a {@link Node} in the external key * ordering. Since dirty nodes are always resident this iterator never forces a * child to be loaded from the store. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ class DirtyChildIterator implements INodeIterator { private final Node node; /** * The index of the next child to return. */ private int index = 0; /** * The index of the last child that was returned to the caller via * {@link #next()}. */ private int lastVisited = -1; /** * The next child to return or null if we need to scan for the next child. * We always test to verify that the child is in fact dirty in * {@link #next()} since it may have been written out between * {@link #hasNext()} and {@link #next()}. */ private AbstractNode child = null; /** * * @param node * The node whose direct dirty children will be visited in key * order. */ public DirtyChildIterator(Node node) { assert node != null; this.node = node; } /** * @return true iff there is a dirty child having a separator key greater * than the last visited dirty child at the moment that this method * was invoked. If this method returns <code>true</code> then an * immediate invocation of {@link #next()} will succeed. However, * that guarentee does not hold if intervening code forces the * scheduled dirty child to be written onto the store. */ public boolean hasNext() { /* * If we are only visiting dirty children, then we need to test the * current index. If it is not a dirty child, then we need to scan until * we either exhaust the children or find a dirty index. */ if( child != null && child.isDirty() ) { /* * We have a child reference and it is still dirty. */ return true; } for( ; index <= node.nkeys; index++ ) { WeakReference<AbstractNode> childRef = node.childRefs[index]; if( childRef == null ) continue; child = childRef.get(); if( child == null ) continue; if( ! child.isDirty() ) continue; /* * Note: We do NOT touch the hard reference queue here since the * DirtyChildrenIterator is used when persisting a node using a * post-order traversal. If a hard reference queue eviction drives * the serialization of a node and we touch the hard reference queue * during the post-order traversal then we break down the semantics * of HardReferenceQueue#append(...) as the eviction does not * necessarily cause the queue to reduce in length. */ // /* // * Touch the child so that it will not be a candidate for eviction // * to the store. // */ // node.btree.touch(node); ... [truncated message content] |
From: Bryan T. <tho...@us...> - 2007-04-13 15:02:41
Update of /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/inf In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9212/src/java/com/bigdata/rdf/inf Modified Files: AbstractRuleRdf.java InferenceEngine.java RuleRdf01.java AbstractRuleRdfs68101213.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. Index: AbstractRuleRdf.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/inf/AbstractRuleRdf.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** AbstractRuleRdf.java 12 Apr 2007 23:59:21 -0000 1.5 --- AbstractRuleRdf.java 13 Apr 2007 15:02:34 -0000 1.6 *************** *** 48,53 **** import org.openrdf.model.URI; ! import com.bigdata.objndx.IEntryIterator; ! import com.bigdata.objndx.IIndex; import com.bigdata.rdf.KeyOrder; import com.bigdata.rdf.TempTripleStore; --- 48,53 ---- import org.openrdf.model.URI; ! import com.bigdata.btree.IEntryIterator; ! import com.bigdata.btree.IIndex; import com.bigdata.rdf.KeyOrder; import com.bigdata.rdf.TempTripleStore; Index: InferenceEngine.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/inf/InferenceEngine.java,v retrieving revision 1.11 retrieving revision 1.12 diff -C2 -d -r1.11 -r1.12 *** InferenceEngine.java 12 Apr 2007 23:59:21 -0000 1.11 --- InferenceEngine.java 13 Apr 2007 15:02:34 -0000 1.12 *************** *** 50,57 **** import org.openrdf.vocabulary.RDFS; import com.bigdata.journal.BufferMode; import com.bigdata.journal.Options; - import com.bigdata.objndx.IEntryIterator; - import com.bigdata.objndx.IIndex; import com.bigdata.rdf.KeyOrder; import com.bigdata.rdf.TempTripleStore; --- 50,57 ---- import org.openrdf.vocabulary.RDFS; + import com.bigdata.btree.IEntryIterator; + import com.bigdata.btree.IIndex; import com.bigdata.journal.BufferMode; import com.bigdata.journal.Options; import com.bigdata.rdf.KeyOrder; import com.bigdata.rdf.TempTripleStore; Index: RuleRdf01.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/inf/RuleRdf01.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** RuleRdf01.java 12 Apr 2007 23:59:21 -0000 1.5 --- RuleRdf01.java 13 Apr 2007 15:02:34 -0000 1.6 *************** *** 46,50 **** import java.util.Vector; ! import com.bigdata.objndx.IEntryIterator; import com.bigdata.rdf.KeyOrder; import com.bigdata.rdf.TempTripleStore; --- 46,50 ---- import java.util.Vector; ! import com.bigdata.btree.IEntryIterator; import com.bigdata.rdf.KeyOrder; import com.bigdata.rdf.TempTripleStore; Index: AbstractRuleRdfs68101213.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/inf/AbstractRuleRdfs68101213.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** AbstractRuleRdfs68101213.java 12 Apr 2007 23:59:21 -0000 1.5 --- AbstractRuleRdfs68101213.java 13 Apr 2007 15:02:34 -0000 1.6 *************** *** 46,50 **** import java.util.Vector; ! import com.bigdata.objndx.IEntryIterator; import com.bigdata.rdf.KeyOrder; import com.bigdata.rdf.TempTripleStore; --- 46,50 ---- import java.util.Vector; ! 
import com.bigdata.btree.IEntryIterator; import com.bigdata.rdf.KeyOrder; import com.bigdata.rdf.TempTripleStore; |
From: Bryan T. <tho...@us...> - 2007-04-13 15:02:41
Update of /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9212/src/java/com/bigdata/rdf Modified Files: AutoIncCounter.java TempTripleStore.java BatchTermInsertOp.java RdfKeyBuilder.java TripleStore.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. Index: TempTripleStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/TempTripleStore.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** TempTripleStore.java 27 Mar 2007 14:35:08 -0000 1.3 --- TempTripleStore.java 13 Apr 2007 15:02:34 -0000 1.4 *************** *** 53,61 **** import org.apache.log4j.Logger; import com.bigdata.journal.TemporaryStore; - import com.bigdata.objndx.BTree; - import com.bigdata.objndx.IEntryIterator; - import com.bigdata.objndx.IIndex; - import com.bigdata.objndx.KeyBuilder; import com.bigdata.rawstore.Bytes; import com.bigdata.rdf.inf.SPO; --- 53,61 ---- import org.apache.log4j.Logger; + import com.bigdata.btree.BTree; + import com.bigdata.btree.IEntryIterator; + import com.bigdata.btree.IIndex; + import com.bigdata.btree.KeyBuilder; import com.bigdata.journal.TemporaryStore; import com.bigdata.rawstore.Bytes; import com.bigdata.rdf.inf.SPO; Index: RdfKeyBuilder.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/RdfKeyBuilder.java,v retrieving revision 1.9 retrieving revision 1.10 diff -C2 -d -r1.9 -r1.10 *** RdfKeyBuilder.java 29 Mar 2007 17:01:47 -0000 1.9 --- RdfKeyBuilder.java 13 Apr 2007 15:02:34 -0000 1.10 *************** *** 55,59 **** import org.openrdf.vocabulary.XmlSchema; ! import com.bigdata.objndx.KeyBuilder; /** --- 55,59 ---- import org.openrdf.vocabulary.XmlSchema; ! import com.bigdata.btree.KeyBuilder; /** Index: BatchTermInsertOp.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/BatchTermInsertOp.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** BatchTermInsertOp.java 6 Mar 2007 20:38:13 -0000 1.1 --- BatchTermInsertOp.java 13 Apr 2007 15:02:34 -0000 1.2 *************** *** 50,57 **** import java.util.Arrays; ! import com.bigdata.objndx.Errors; ! import com.bigdata.objndx.IBatchOp; ! import com.bigdata.objndx.IIndex; ! import com.bigdata.objndx.ISimpleBTree; import com.bigdata.rdf.model.OptimizedValueFactory.TermIdComparator; import com.bigdata.rdf.model.OptimizedValueFactory._Value; --- 50,57 ---- import java.util.Arrays; ! import com.bigdata.btree.Errors; ! import com.bigdata.btree.IBatchOp; ! import com.bigdata.btree.IIndex; ! 
import com.bigdata.btree.ISimpleBTree; import com.bigdata.rdf.model.OptimizedValueFactory.TermIdComparator; import com.bigdata.rdf.model.OptimizedValueFactory._Value; Index: AutoIncCounter.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/AutoIncCounter.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** AutoIncCounter.java 21 Feb 2007 20:16:43 -0000 1.4 --- AutoIncCounter.java 13 Apr 2007 15:02:34 -0000 1.5 *************** *** 7,13 **** import java.nio.ByteBuffer; import com.bigdata.io.ByteBufferInputStream; import com.bigdata.journal.ICommitter; - import com.bigdata.objndx.UserDefinedFunction; import com.bigdata.rawstore.IRawStore; import com.bigdata.rdf.rio.BulkRioLoader; --- 7,13 ---- import java.nio.ByteBuffer; + import com.bigdata.btree.UserDefinedFunction; import com.bigdata.io.ByteBufferInputStream; import com.bigdata.journal.ICommitter; import com.bigdata.rawstore.IRawStore; import com.bigdata.rdf.rio.BulkRioLoader; Index: TripleStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/TripleStore.java,v retrieving revision 1.27 retrieving revision 1.28 diff -C2 -d -r1.27 -r1.28 *** TripleStore.java 12 Apr 2007 23:59:21 -0000 1.27 --- TripleStore.java 13 Apr 2007 15:02:34 -0000 1.28 *************** *** 65,79 **** import org.openrdf.model.Value; import com.bigdata.journal.ICommitRecord; import com.bigdata.journal.IJournal; import com.bigdata.journal.Journal; import com.bigdata.journal.Tx; - import com.bigdata.objndx.BTree; - import com.bigdata.objndx.Errors; - import com.bigdata.objndx.IBatchOp; - import com.bigdata.objndx.IEntryIterator; - import com.bigdata.objndx.IIndex; - import com.bigdata.objndx.ISimpleBTree; - import com.bigdata.objndx.KeyBuilder; import com.bigdata.rawstore.Bytes; import com.bigdata.rdf.inf.SPO; --- 65,79 ---- import org.openrdf.model.Value; + import com.bigdata.btree.BTree; + import com.bigdata.btree.Errors; + import com.bigdata.btree.IBatchOp; + import com.bigdata.btree.IEntryIterator; + import com.bigdata.btree.IIndex; + import com.bigdata.btree.ISimpleBTree; + import com.bigdata.btree.KeyBuilder; import com.bigdata.journal.ICommitRecord; import com.bigdata.journal.IJournal; import com.bigdata.journal.Journal; import com.bigdata.journal.Tx; import com.bigdata.rawstore.Bytes; import com.bigdata.rdf.inf.SPO; |
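AutoIncCounter.java now imports com.bigdata.btree.UserDefinedFunction, which is the hook that the @todo in the IBatchBTree javadoc proposes for moving "test for the key, increment a counter if absent, insert the counter under the key" logic into the index itself. The sketch below (not part of the commit) spells out that scalar logic; the KeyValueIndex interface is a stand-in declared locally for illustration only and is not the bigdata API.

    /**
     * Sketch of the get-or-assign-counter pattern referenced by the @todo in
     * IBatchBTree. The index interface is a local stand-in, not bigdata's.
     */
    public class GetOrAssignIdSketch {

        /** Hypothetical scalar key/value index, declared only for this sketch. */
        interface KeyValueIndex {

            Object lookup(byte[] key);

            Object insert(byte[] key, Object value);

        }

        private long nextId = 1L; // counter handed out to newly seen keys.

        /**
         * Return the identifier already stored under the key, or assign the
         * next counter value and store it if the key is not found. Issued as
         * three client-side scalar calls this read-check-write sequence can
         * race with other writers; executed inside the index's single-threaded
         * mutation context (e.g. as a user-defined function) it is atomic,
         * which is the point of the proposed extension.
         */
        long getOrAssignId(KeyValueIndex ndx, byte[] key) {

            Long id = (Long) ndx.lookup(key);

            if (id == null) {

                id = Long.valueOf(nextId++);

                ndx.insert(key, id);

            }

            return id.longValue();

        }

    }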
From: Bryan T. <tho...@us...> - 2007-04-13 15:02:41
Update of /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9212/src/test/com/bigdata/rdf Modified Files: TestRdfKeyBuilder.java TestInsertRateStore.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. Index: TestRdfKeyBuilder.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/TestRdfKeyBuilder.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** TestRdfKeyBuilder.java 27 Jan 2007 15:58:57 -0000 1.3 --- TestRdfKeyBuilder.java 13 Apr 2007 15:02:35 -0000 1.4 *************** *** 53,58 **** import org.openrdf.vocabulary.XmlSchema; ! import com.bigdata.objndx.BytesUtil; ! import com.bigdata.objndx.KeyBuilder; /** --- 53,58 ---- import org.openrdf.vocabulary.XmlSchema; ! import com.bigdata.btree.BytesUtil; ! import com.bigdata.btree.KeyBuilder; /** Index: TestInsertRateStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/TestInsertRateStore.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** TestInsertRateStore.java 22 Feb 2007 16:58:59 -0000 1.5 --- TestInsertRateStore.java 13 Apr 2007 15:02:35 -0000 1.6 *************** *** 55,60 **** import org.openrdf.model.impl.ValueFactoryImpl; ! import com.bigdata.objndx.BTree; ! import com.bigdata.objndx.BytesUtil.UnsignedByteArrayComparator; import com.bigdata.rawstore.Bytes; import com.bigdata.rdf.model.OptimizedValueFactory; --- 55,60 ---- import org.openrdf.model.impl.ValueFactoryImpl; ! import com.bigdata.btree.BTree; ! import com.bigdata.btree.BytesUtil.UnsignedByteArrayComparator; import com.bigdata.rawstore.Bytes; import com.bigdata.rdf.model.OptimizedValueFactory; |
From: Bryan T. <tho...@us...> - 2007-04-13 15:02:41
Update of /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/model In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9212/src/java/com/bigdata/rdf/model Modified Files: OptimizedValueFactory.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. Index: OptimizedValueFactory.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/model/OptimizedValueFactory.java,v retrieving revision 1.6 retrieving revision 1.7 diff -C2 -d -r1.6 -r1.7 *** OptimizedValueFactory.java 12 Apr 2007 23:59:21 -0000 1.6 --- OptimizedValueFactory.java 13 Apr 2007 15:02:34 -0000 1.7 *************** *** 65,69 **** import org.openrdf.sesame.sail.StatementIterator; ! import com.bigdata.objndx.BytesUtil; import com.bigdata.rdf.RdfKeyBuilder; import com.bigdata.rdf.TripleStore; --- 65,69 ---- import org.openrdf.sesame.sail.StatementIterator; ! import com.bigdata.btree.BytesUtil; import com.bigdata.rdf.RdfKeyBuilder; import com.bigdata.rdf.TripleStore; |
From: Bryan T. <tho...@us...> - 2007-04-13 15:02:38
Update of /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/sail In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9212/src/java/com/bigdata/rdf/sail Modified Files: SimpleRdfRepository.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. Index: SimpleRdfRepository.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/sail/SimpleRdfRepository.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** SimpleRdfRepository.java 12 Apr 2007 23:59:22 -0000 1.1 --- SimpleRdfRepository.java 13 Apr 2007 15:02:34 -0000 1.2 *************** *** 80,86 **** import org.openrdf.sesame.sail.util.SingleStatementIterator; import com.bigdata.journal.Journal; import com.bigdata.journal.Options; - import com.bigdata.objndx.IEntryIterator; import com.bigdata.rdf.KeyOrder; import com.bigdata.rdf.TripleStore; --- 80,86 ---- import org.openrdf.sesame.sail.util.SingleStatementIterator; + import com.bigdata.btree.IEntryIterator; import com.bigdata.journal.Journal; import com.bigdata.journal.Options; import com.bigdata.rdf.KeyOrder; import com.bigdata.rdf.TripleStore; |
From: Bryan T. <tho...@us...> - 2007-04-13 15:02:38
Update of /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/rio In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv9212/src/java/com/bigdata/rdf/rio Modified Files: Buffer.java MultiThreadedPresortRioLoader.java BulkLoaderBuffer.java BulkRioLoader.java Log Message: Package rename from com.bigdata.objndx to com.bigdata.btree. Since we are using CVS, this is going to break the access to prior versions in the source file history. Index: BulkRioLoader.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/rio/BulkRioLoader.java,v retrieving revision 1.12 retrieving revision 1.13 diff -C2 -d -r1.12 -r1.13 *** BulkRioLoader.java 11 Mar 2007 11:43:32 -0000 1.12 --- BulkRioLoader.java 13 Apr 2007 15:02:34 -0000 1.13 *************** *** 62,67 **** import org.openrdf.rio.rdfxml.RdfXmlParser; ! import com.bigdata.objndx.IndexSegment; ! import com.bigdata.objndx.IndexSegmentFileStore; import com.bigdata.rawstore.Bytes; import com.bigdata.rdf.TripleStore; --- 62,67 ---- import org.openrdf.rio.rdfxml.RdfXmlParser; ! import com.bigdata.btree.IndexSegment; ! import com.bigdata.btree.IndexSegmentFileStore; import com.bigdata.rawstore.Bytes; import com.bigdata.rdf.TripleStore; Index: MultiThreadedPresortRioLoader.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/rio/MultiThreadedPresortRioLoader.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** MultiThreadedPresortRioLoader.java 6 Feb 2007 23:06:43 -0000 1.5 --- MultiThreadedPresortRioLoader.java 13 Apr 2007 15:02:34 -0000 1.6 *************** *** 58,62 **** import org.openrdf.rio.rdfxml.RdfXmlParser; ! import com.bigdata.objndx.KeyBuilder; import com.bigdata.rawstore.Bytes; import com.bigdata.rdf.RdfKeyBuilder; --- 58,62 ---- import org.openrdf.rio.rdfxml.RdfXmlParser; ! 
import com.bigdata.btree.KeyBuilder; import com.bigdata.rawstore.Bytes; import com.bigdata.rdf.RdfKeyBuilder; Index: BulkLoaderBuffer.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/rio/BulkLoaderBuffer.java,v retrieving revision 1.10 retrieving revision 1.11 diff -C2 -d -r1.10 -r1.11 *** BulkLoaderBuffer.java 27 Mar 2007 14:35:08 -0000 1.10 --- BulkLoaderBuffer.java 13 Apr 2007 15:02:34 -0000 1.11 *************** *** 55,65 **** import java.util.Arrays; import com.bigdata.io.ByteBufferOutputStream; - import com.bigdata.objndx.IIndex; - import com.bigdata.objndx.IndexSegment; - import com.bigdata.objndx.IndexSegmentBuilder; - import com.bigdata.objndx.KeyBufferSerializer; - import com.bigdata.objndx.NodeSerializer; - import com.bigdata.objndx.RecordCompressor; import com.bigdata.rdf.KeyOrder; import com.bigdata.rdf.TripleStore; --- 55,65 ---- import java.util.Arrays; + import com.bigdata.btree.IIndex; + import com.bigdata.btree.IndexSegment; + import com.bigdata.btree.IndexSegmentBuilder; + import com.bigdata.btree.KeyBufferSerializer; + import com.bigdata.btree.NodeSerializer; + import com.bigdata.btree.RecordCompressor; import com.bigdata.io.ByteBufferOutputStream; import com.bigdata.rdf.KeyOrder; import com.bigdata.rdf.TripleStore; Index: Buffer.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/rio/Buffer.java,v retrieving revision 1.7 retrieving revision 1.8 diff -C2 -d -r1.7 -r1.8 *** Buffer.java 9 Feb 2007 21:19:25 -0000 1.7 --- Buffer.java 13 Apr 2007 15:02:34 -0000 1.8 *************** *** 57,62 **** import org.openrdf.model.Value; ! import com.bigdata.objndx.IEntryIterator; ! import com.bigdata.objndx.NoSuccessorException; import com.bigdata.rdf.KeyOrder; import com.bigdata.rdf.RdfKeyBuilder; --- 57,62 ---- import org.openrdf.model.Value; ! import com.bigdata.btree.IEntryIterator; ! import com.bigdata.btree.NoSuccessorException; import com.bigdata.rdf.KeyOrder; import com.bigdata.rdf.RdfKeyBuilder; |