From: Bryan T. <tho...@us...> - 2007-04-04 16:52:20

Update of /cvsroot/cweb/bigdata/src/architecture
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv11927/src/architecture

Added Files:
    performance.xls
Log Message:
Updated the benchmark for sustained IO, raw journal write rates, and both
unisolated and isolated index write rates.

--- NEW FILE: performance.xls ---
(This appears to be a binary file; contents omitted.)

From: Bryan T. <tho...@us...> - 2007-04-04 16:52:20

Update of /cvsroot/cweb/bigdata
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv11927

Modified Files:
    perf.txt
Log Message:
Updated the benchmark for sustained IO, raw journal write rates, and both
unisolated and isolated index write rates.

Index: perf.txt
===================================================================
RCS file: /cvsroot/cweb/bigdata/perf.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -C2 -d -r1.1 -r1.2
*** perf.txt	8 Mar 2007 18:14:05 -0000	1.1
--- perf.txt	4 Apr 2007 16:52:17 -0000	1.2
***************
*** 89,90 ****
--- 89,91 ----
  disk. This suggests that the big win for TPS throughput is going to be
  group commit.
+

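The note above ends by observing that the big win for TPS throughput is going to be group commit. As a rough, hypothetical sketch of that idea (none of these class or method names are bigdata's): each writer enqueues a latch and blocks, and a single committer thread performs one disk force for however many commit requests have accumulated, amortizing the sync across the batch.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

public class GroupCommitSketch extends Thread {

    // Each waiting writer is represented by a latch that is released
    // once its commit has been made durable.
    private final BlockingQueue<CountDownLatch> waiters =
            new LinkedBlockingQueue<CountDownLatch>();

    /** Writer side: request a commit and block until it is durable. */
    public void commit() throws InterruptedException {
        final CountDownLatch durable = new CountDownLatch(1);
        waiters.put(durable);
        durable.await();
    }

    /** Committer side: one force per batch, not one per transaction. */
    public void run() {
        final List<CountDownLatch> batch = new ArrayList<CountDownLatch>();
        try {
            while (true) {
                batch.add(waiters.take()); // wait for the first request
                waiters.drainTo(batch);    // pick up everything else pending
                forceToDisk();             // ONE sync covers the whole batch
                for (CountDownLatch latch : batch) {
                    latch.countDown();     // release every writer in the batch
                }
                batch.clear();
            }
        } catch (InterruptedException ex) {
            // shutdown
        }
    }

    private void forceToDisk() {
        // stand-in for forcing the journal's channel to stable storage,
        // e.g. FileChannel.force(false)
    }
}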
From: Bryan T. <tho...@us...> - 2007-04-04 16:52:20

Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/objndx
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv11927/src/java/com/bigdata/objndx

Modified Files:
    Counters.java AbstractBTree.java
Log Message:
Updated the benchmark for sustained IO, raw journal write rates, and both
unisolated and isolated index write rates.

Index: AbstractBTree.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/objndx/AbstractBTree.java,v
retrieving revision 1.21
retrieving revision 1.22
diff -C2 -d -r1.21 -r1.22
*** AbstractBTree.java	27 Mar 2007 14:34:22 -0000	1.21
--- AbstractBTree.java	4 Apr 2007 16:52:16 -0000	1.22
***************
*** 137,141 ****
       * Counters tracking various aspects of the btree.
       */
!     protected final Counters counters = new Counters(this);

      /**
--- 137,141 ----
       * Counters tracking various aspects of the btree.
       */
!     /*protected*/ public final Counters counters = new Counters(this);

      /**
Index: Counters.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/objndx/Counters.java,v
retrieving revision 1.4
retrieving revision 1.5
diff -C2 -d -r1.4 -r1.5
*** Counters.java	6 Feb 2007 23:07:18 -0000	1.4
--- Counters.java	4 Apr 2007 16:52:16 -0000	1.5
***************
*** 8,16 ****
  *
  * @todo add nano timers and track storage used by the index. The goal is to
! *       know how much of the time of the server is consumed by the index,
! *       what percentage of the store is dedicated to the index, how
! *       expensive it is to do some scan-based operations (merge down,
! *       delete of transactional isolated persistent index), and evaluate
! *       the buffer strategy by comparing accesses with IOs.
  */
 public class Counters {
--- 8,18 ----
  *
  * @todo add nano timers and track storage used by the index. The goal is to
! *       know how much of the time of the server is consumed by the index, what
! *       percentage of the store is dedicated to the index, how expensive it is
! *       to do some scan-based operations (merge down, delete of transactional
! *       isolated persistent index), and evaluate the buffer strategy by
! *       comparing accesses with IOs.
! *
! * @todo expose more of the counters using property access methods.
  */
 public class Counters {
***************
*** 47,50 ****
--- 49,88 ----
      long bytesRead = 0L;
      long bytesWritten = 0L;
+
+     /**
+      * The #of nodes written on the backing store.
+      */
+     final public int getNodesWritten() {
+
+         return nodesWritten;
+
+     }
+
+     /**
+      * The #of leaves written on the backing store.
+      */
+     final public int getLeavesWritten() {
+
+         return leavesWritten;
+
+     }
+
+     /**
+      * The number of bytes read from the backing store.
+      */
+     final public long getBytesRead() {
+
+         return bytesRead;
+
+     }
+
+     /**
+      * The number of bytes written onto the backing store.
+      */
+     final public long getBytesWritten() {
+
+         return bytesWritten;
+
+     }

      // @todo consider changing to logging so that the format will be nicer

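Since this commit also widens AbstractBTree's counters field to public, the new accessors can be sampled directly from client code. A small hypothetical usage (CounterReport is illustrative; AbstractBTree and Counters are the classes touched above):

import com.bigdata.objndx.AbstractBTree;

public class CounterReport {

    /** Dumps the btree's I/O statistics using the accessors added above. */
    public static void report(AbstractBTree ndx) {
        System.err.println("wrote " + ndx.counters.getBytesWritten()
                + " bytes (" + ndx.counters.getNodesWritten() + " nodes, "
                + ndx.counters.getLeavesWritten() + " leaves), read "
                + ndx.counters.getBytesRead() + " bytes");
    }
}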
From: Bryan T. <tho...@us...> - 2007-04-04 16:52:20

Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/journal
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv11927/src/test/com/bigdata/journal

Modified Files:
    BenchmarkJournalWriteRate.java
Log Message:
Updated the benchmark for sustained IO, raw journal write rates, and both
unisolated and isolated index write rates.

Index: BenchmarkJournalWriteRate.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/BenchmarkJournalWriteRate.java,v
retrieving revision 1.14
retrieving revision 1.15
diff -C2 -d -r1.14 -r1.15
*** BenchmarkJournalWriteRate.java	22 Mar 2007 21:11:24 -0000	1.14
--- BenchmarkJournalWriteRate.java	4 Apr 2007 16:52:16 -0000	1.15
***************
*** 52,56 ****
--- 52,58 ----
  import java.io.RandomAccessFile;
  import java.nio.ByteBuffer;
+ import java.text.NumberFormat;
  import java.util.Properties;
+ import java.util.UUID;

  import junit.framework.Test;
***************
*** 58,70 ****
[...1123 lines suppressed...]
*** 677,685 ****
      TestSuite suite = new TestSuite("Benchmark Journal Write Rates");

!     suite.addTestSuite( BenchmarkTransientJournal.class );
!     suite.addTestSuite( BenchmarkDirectJournal.class );
! //  suite.addTestSuite( BenchmarkMappedJournal.class );
!     suite.addTestSuite( BenchmarkDiskJournal.class );
!     suite.addTestSuite( BenchmarkSlotBasedOptimium.class );
      suite.addTestSuite( BenchmarkBlockBasedOptimium.class );
      suite.addTestSuite( BenchmarkSustainedTransferOptimium.class );
--- 812,820 ----
      TestSuite suite = new TestSuite("Benchmark Journal Write Rates");

!     // suite.addTestSuite( BenchmarkTransientJournal.class );
!     // suite.addTestSuite( BenchmarkDirectJournal.class );
! //// suite.addTestSuite( BenchmarkMappedJournal.class );
!     // suite.addTestSuite( BenchmarkDiskJournal.class );
!     suite.addTestSuite( BenchmarkSmallRecordOptimium.class );
      suite.addTestSuite( BenchmarkBlockBasedOptimium.class );
      suite.addTestSuite( BenchmarkSustainedTransferOptimium.class );

From: Bryan T. <tho...@us...> - 2007-04-04 16:52:19

Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/journal
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv11927/src/java/com/bigdata/journal

Modified Files:
    AbstractBufferStrategy.java Options.java AbstractJournal.java
Log Message:
Updated the benchmark for sustained IO, raw journal write rates, and both
unisolated and isolated index write rates.

Index: AbstractBufferStrategy.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/AbstractBufferStrategy.java,v
retrieving revision 1.15
retrieving revision 1.16
diff -C2 -d -r1.15 -r1.16
*** AbstractBufferStrategy.java	29 Mar 2007 17:01:33 -0000	1.15
--- AbstractBufferStrategy.java	4 Apr 2007 16:52:16 -0000	1.16
***************
*** 149,155 ****
      if( maximumExtent != 0L && required > maximumExtent ) {

!         // Would exceed the maximum extent (iff a hard limit).
!         log.error("Would exceed maximumExtent="+maximumExtent);

          return false;
--- 149,160 ----
      if( maximumExtent != 0L && required > maximumExtent ) {

!         /*
!          * Would exceed the maximum extent (iff a hard limit).
!          *
!          * Note: this will show up for transactions whose write set
!          * overflows the in-memory buffer onto the disk.
!          */
!         log.warn("Would exceed maximumExtent="+maximumExtent);

          return false;

Index: AbstractJournal.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/AbstractJournal.java,v
retrieving revision 1.9
retrieving revision 1.10
diff -C2 -d -r1.9 -r1.10
*** AbstractJournal.java	29 Mar 2007 17:01:33 -0000	1.9
--- AbstractJournal.java	4 Apr 2007 16:52:16 -0000	1.10
***************
*** 132,136 ****
  * solution resources would be migrated to other hosts to reduce the total
  * sustained resource load).</li>
! * <li> Concurrent load for RDFS w/o rollback.</li>
  * <li> Network api (data service).</li>
  * <li> Group commit for higher transaction throughput.<br>
--- 132,151 ----
  * solution resources would be migrated to other hosts to reduce the total
  * sustained resource load).</li>
! * <li> Concurrent load for RDFS w/o rollback. There should also be an
! * unisolated read mode that does "read-behind", i.e., reading from the last
! * committed state on the store. The purpose of this is to permit fully
! * concurrent readers with the writer in an unisolated mode. If concurrent
! * readers actually read from the same btree instance as the writer then
! * exceptions would arise from concurrent modification. This problem is
! * trivially avoided by maintaining a distinction between a concurrent read-only
! * btree emerging from the last committed state and a single non-concurrent
! * access btree for the writer. In this manner readers may read from the
! * unisolated state of the database with concurrent modification -- to another
! * instance of the btree. Once the writer commits, any new readers will read
! * from its committed state and the older btree objects will be flushed from
! * cache soon after their readers terminate -- thereby providing a consistent
! * view to a reader (or the reader could always switch to read from the then
! * current btree after a commit - just like the distinction between an isolated
! * reader and a read-committed reader).</li>
  * <li> Network api (data service).</li>
  * <li> Group commit for higher transaction throughput.<br>

Index: Options.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Options.java,v
retrieving revision 1.11
retrieving revision 1.12
diff -C2 -d -r1.11 -r1.12
*** Options.java	22 Mar 2007 21:11:25 -0000	1.11
--- Options.java	4 Apr 2007 16:52:16 -0000	1.12
***************
*** 133,139 ****
  * mode that writes through synchronously to stable storage (default
  * <code>No</code>). This option does NOT affect the stability of the
! * data on disk but may be used to tweak the file system buffering.
  *
  * @see #FORCE_ON_COMMIT, which controls the stability of the data on disk.
  * @see ForceEnum
  */
--- 133,141 ----
  * mode that writes through synchronously to stable storage (default
  * <code>No</code>). This option does NOT affect the stability of the
! * data on disk but may be used to tweak the file system buffering (forcing
! * writes is generally MUCH slower and is turned off by default).
  *
  * @see #FORCE_ON_COMMIT, which controls the stability of the data on disk.
+ * @see #DEFAULT_FORCE_WRITES
  * @see ForceEnum
  */

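The "read-behind" design sketched in that javadoc reduces to a few lines. This is a hypothetical illustration, not the bigdata implementation: BTreeView is a stand-in type parameter, and the one-writer discipline is assumed rather than enforced.

import java.util.concurrent.atomic.AtomicReference;

public class ReadBehindSketch<BTreeView> {

    /** Read-only view as of the last commit; safe for concurrent readers. */
    private final AtomicReference<BTreeView> lastCommitted;

    public ReadBehindSketch(BTreeView initial) {
        this.lastCommitted = new AtomicReference<BTreeView>(initial);
    }

    /** Concurrent readers never touch the writer's mutable instance. */
    public BTreeView readerView() {
        return lastCommitted.get();
    }

    /**
     * Called by the single writer on commit: publish the committed state so
     * that NEW readers see it; existing readers keep their old, consistent
     * view until they terminate.
     */
    public void publishCommit(BTreeView committedView) {
        lastCommitted.set(committedView);
    }
}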
From: Bryan T. <tho...@us...> - 2007-04-04 16:51:26

Update of /cvsroot/cweb/bigdata-rdf
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv11552

Modified Files:
    .cvsignore
Log Message:
Updated the benchmark for sustained IO, raw journal write rates, and both
unisolated and isolated index write rates.

Index: .cvsignore
===================================================================
RCS file: /cvsroot/cweb/bigdata-rdf/.cvsignore,v
retrieving revision 1.4
retrieving revision 1.5
diff -C2 -d -r1.4 -r1.5
*** .cvsignore	13 Feb 2007 23:02:15 -0000	1.4
--- .cvsignore	4 Apr 2007 16:51:23 -0000	1.5
***************
*** 22,23 ****
--- 22,26 ----
  TestMetrics-metrics-documents-1partition-10k-OOM.csv
  TestMetrics-metrics-smallDocuments-noPartitions.csv
+ smallDocuments
+ test_loadFile_presortRioLoader
+ TestMetrics-metrics*

From: Bryan T. <tho...@us...> - 2007-03-29 17:01:51

Update of /cvsroot/cweb/bigdata-rdf/src/resources/logging
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16595/src/resources/logging

Modified Files:
    log4j.properties
Log Message:
Fixed bug in overflow handling for triple store. Added DataService UUID[] to
partition metadata.

Index: log4j.properties
===================================================================
RCS file: /cvsroot/cweb/bigdata-rdf/src/resources/logging/log4j.properties,v
retrieving revision 1.2
retrieving revision 1.3
diff -C2 -d -r1.2 -r1.3
*** log4j.properties	17 Feb 2007 23:15:26 -0000	1.2
--- log4j.properties	29 Mar 2007 17:01:47 -0000	1.3
***************
*** 13,16 ****
--- 13,17 ----
  log4j.logger.com.bigdata.rdf.TripleStore=DEBUG
  log4j.logger.junit.framework.Test=INFO
+ log4j.logger.com.bigdata.journal.ResourceManager=INFO, dest1, A2

  log4j.appender.dest1=org.apache.log4j.ConsoleAppender
***************
*** 20,21 ****
--- 21,32 ----
  #log4j.appender.dest1.layout.ConversionPattern=%-4r(%d) [%t] %-5p %c(%l:%M) %x - %m%n
  log4j.appender.dest1.layout.ConversionPattern=%-5p: %l: %m%n
+
+ # A2 is set to be a FileAppender.
+ log4j.appender.A2=org.apache.log4j.FileAppender
+ log4j.appender.A2.Threshold=DEBUG
+ log4j.appender.A2.File=ResourceManager.log
+ log4j.appender.A2.Append=true
+
+ # A2 uses PatternLayout.
+ log4j.appender.A2.layout=org.apache.log4j.PatternLayout
+ log4j.appender.A2.layout.ConversionPattern=%5p [%t] %l %d - %m%n

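For reference, the net effect of the added lines: anything logged against the com.bigdata.journal.ResourceManager category at INFO or above now reaches both the console appender (dest1) and the ResourceManager.log file (A2). A minimal illustration using the standard log4j 1.2 API (the class and message are made up):

import org.apache.log4j.Logger;

public class LoggingExample {
    public static void main(String[] args) {
        // Resolved against the category configured above; events at INFO
        // and up go to the console and are appended to ResourceManager.log.
        Logger log = Logger.getLogger("com.bigdata.journal.ResourceManager");
        log.info("resource manager event"); // illustrative message
    }
}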
From: Bryan T. <tho...@us...> - 2007-03-29 17:01:51
Update of /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16595/src/java/com/bigdata/rdf Modified Files: TripleStore.java RdfKeyBuilder.java Log Message: Fixed bug in overflow handling for triple store. Added DataService UUID[] to partition metadata. Index: RdfKeyBuilder.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/RdfKeyBuilder.java,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -d -r1.8 -r1.9 *** RdfKeyBuilder.java 27 Mar 2007 14:35:08 -0000 1.8 --- RdfKeyBuilder.java 29 Mar 2007 17:01:47 -0000 1.9 *************** *** 177,180 **** --- 177,181 ---- keyBuilder.reset().append(CODE_LIT); + return appendString(text).getKey(); *************** *** 186,190 **** --- 187,193 ---- keyBuilder.reset().append(CODE_LCL); + appendString(languageCode).appendNul(); + return appendString(text).getKey(); *************** *** 209,212 **** --- 212,216 ---- keyBuilder.reset().append(CODE_DTL); + appendString(datatype).appendNul(); *************** *** 241,244 **** --- 245,252 ---- } + /** + * The key corresponding to the start of the literals section of the + * terms index. + */ public byte[] litStartKey() { *************** *** 247,250 **** --- 255,262 ---- } + /** + * The key corresponding to the first key after the literals section of the + * terms index. + */ public byte[] litEndKey() { Index: TripleStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/TripleStore.java,v retrieving revision 1.25 retrieving revision 1.26 diff -C2 -d -r1.25 -r1.26 *** TripleStore.java 27 Mar 2007 17:11:48 -0000 1.25 --- TripleStore.java 29 Mar 2007 17:01:47 -0000 1.26 *************** *** 202,206 **** * @version $Id$ */ ! public class TripleStore extends MasterJournal { /** --- 202,206 ---- * @version $Id$ */ ! public class TripleStore extends /*Master*/Journal { /** *************** *** 1066,1073 **** } ! public void overflow() { ! ! System.err.println("*** Overflow *** "); // the current state. final long counter = getCounter().getCounter(); --- 1066,1097 ---- } ! // /** ! // * FIXME The logic here should be triggered IFF the journal actually ! // * overflows (that is, iff a new journal is opened and the old journal ! // * becomes read-only). ! // * <p> ! // */ ! // public boolean overflow() { ! // ! // // invoke the base behavior on the super class. ! // boolean ret = super.overflow(); ! // ! // return ret; ! // ! // } + /** + * @todo The commit is required to make the new counter restart safe by + * placing an address for it into its root slot. + * <p> + * rather than having multiple commits during overflow, we should + * create a method to which control is handed before and after the + * overflow event processing in the base class which provides the + * opportunity for such maintenance events. we could then just setup + * the new counter and let the overflow handle the commit. + * + */ + protected Object willOverflow() { + // the current state. final long counter = getCounter().getCounter(); *************** *** 1080,1086 **** ndx_osp = null; this.counter = null; ! // invoke the base behavior on the super class. ! super.overflow(); // create a new counter that will be persisted on the new slave journal. --- 1104,1115 ---- ndx_osp = null; this.counter = null; + + return counter; + + } + + protected void didOverflow(Object state) { ! 
final long counter = (Long)state; // create a new counter that will be persisted on the new slave journal. *************** *** 1090,1102 **** setCommitter( ROOT_COUNTER, this.counter ); - /* - * @todo this commit is required to make the new counter restart safe by - * placing an address for it into its root slot. rather than having - * multiple commits during overflow, we should create a method to which - * control is handed before and after the overflow event processing in - * the base class which provides the opportunity for such maintenance - * events. we could then just setup the new counter and let the overflow - * handle the commit. - */ commit(); --- 1119,1122 ---- |
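The refactoring in this commit replaces TripleStore's overflow() override with a pair of template-method hooks, willOverflow() and didOverflow(Object), that the master journal invokes around the journal swap. A simplified sketch of that control flow (hook names match the diff; the journal-replacement internals are elided and replaceJournal() is a stand-in):

abstract class OverflowHooksSketch {

    public boolean overflow() {
        // hook: e.g. TripleStore detaches its indices and returns the counter.
        final Object state = willOverflow();

        replaceJournal(); // open the new journal, retire the old one (elided)

        // hook: e.g. TripleStore re-creates the counter on the new journal
        // and commits to make it restart safe.
        didOverflow(state);

        return true; // handled overflow by opening a new journal
    }

    /** Invoked before the journals are swapped; may return opaque state. */
    protected Object willOverflow() { return null; }

    /** Invoked after the swap with the state returned by willOverflow(). */
    protected void didOverflow(Object state) {}

    /** Stand-in for the buffer/journal replacement logic. */
    abstract void replaceJournal();
}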
From: Bryan T. <tho...@us...> - 2007-03-29 17:01:51

Update of /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/serializers
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16595/src/java/com/bigdata/rdf/serializers

Modified Files:
    RdfValueSerializer.java
Log Message:
Fixed bug in overflow handling for triple store. Added DataService UUID[] to
partition metadata.

Index: RdfValueSerializer.java
===================================================================
RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/serializers/RdfValueSerializer.java,v
retrieving revision 1.1
retrieving revision 1.2
diff -C2 -d -r1.1 -r1.2
*** RdfValueSerializer.java	9 Feb 2007 20:18:56 -0000	1.1
--- RdfValueSerializer.java	29 Mar 2007 17:01:47 -0000	1.2
***************
*** 51,54 ****
--- 51,55 ----
  import org.openrdf.model.Value;

+ import com.bigdata.objndx.IIndex;
  import com.bigdata.objndx.IValueSerializer;
  import com.bigdata.rdf.RdfKeyBuilder;
***************
*** 70,75 ****
  * optimization, UTF compression is just slightly slower on write.
  *
! * @todo use a per-leaf dictionary to factor out common strings as codes,
! *       e.g., Hamming codes.
  */
 public class RdfValueSerializer implements IValueSerializer {
--- 71,79 ----
  * optimization, UTF compression is just slightly slower on write.
  *
! * @todo use a per-leaf dictionary to factor out common strings as codes, e.g.,
! *       Hamming codes. This should be done in the basic btree package.
! *
! * @todo pass reference to the {@link IIndex} object so that we can version the
! *       per value serialization?
  */
 public class RdfValueSerializer implements IValueSerializer {

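The per-leaf dictionary @todo can be pictured with a toy encoder. This is a sketch only: it shows the dictionary half (common strings replaced by small integer codes scoped to one leaf) and deliberately leaves out the coding scheme and the btree integration, both of which the @todo marks as open.

import java.util.HashMap;
import java.util.Map;

public class LeafDictionarySketch {

    // Dictionary scoped to a single leaf: string -> small integer code.
    private final Map<String, Integer> codes = new HashMap<String, Integer>();

    /** Returns the code for s, assigning the next free code on first use. */
    public int encode(String s) {
        Integer code = codes.get(s);
        if (code == null) {
            code = Integer.valueOf(codes.size());
            codes.put(s, code);
        }
        return code.intValue();
    }
}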
From: Bryan T. <tho...@us...> - 2007-03-29 17:01:51
Update of /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/metrics In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16595/src/test/com/bigdata/rdf/metrics Modified Files: TestMetrics.java Log Message: Fixed bug in overflow handling for triple store. Added DataService UUID[] to partition metadata. Index: TestMetrics.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/metrics/TestMetrics.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** TestMetrics.java 21 Feb 2007 20:16:44 -0000 1.3 --- TestMetrics.java 29 Mar 2007 17:01:47 -0000 1.4 *************** *** 62,65 **** --- 62,66 ---- import org.CognitiveWeb.util.PropertyUtil; + import com.bigdata.rawstore.Bytes; import com.bigdata.rdf.TripleStore.LoadStats; *************** *** 340,343 **** --- 341,347 ---- * <dd>The total #of literals in the store or zero if not using the GOM * SAIL.</dd> + * <dt>totalMemory</dt> + * <dd>The total VM memory (in Megabytes) as reported by + * {@link Runtime#totalMemory()}.</dd> * <dt>error</dt> * <dd>This column is <code>Ok</code> if there was no error. Otherwise *************** *** 355,365 **** * * <pre> ! * triplesInStore - #of triples in the repository ! * inferencesInStore - #of inferences in the repository ! * proofsInStore - #of proofs in the repository. ! * urisInStore - #of URIs in the repository. ! * bnodesInStore - #of blank nodes in the repository. ! * literalsInStore - #of literals in the repository. ! * sizeOnDisk - #of bytes, MB, GB on disk for the repository. * </pre> * --- 359,369 ---- * * <pre> ! * triplesInStore - #of triples in the repository ! * inferencesInStore - #of inferences in the repository ! * proofsInStore - #of proofs in the repository. ! * urisInStore - #of URIs in the repository. ! * bnodesInStore - #of blank nodes in the repository. ! * literalsInStore - #of literals in the repository. ! * sizeOnDisk - #of bytes, MB, GB on disk for the repository. * </pre> * *************** *** 446,450 **** metricsWriter.write(""+t.bnodeCount1+", "); metricsWriter.write(""+t.literalCount1+", "); ! // error metricsWriter.write(""+(t.error == null?"Ok":t.error.getMessage())+", "); --- 450,456 ---- metricsWriter.write(""+t.bnodeCount1+", "); metricsWriter.write(""+t.literalCount1+", "); ! ! // total VM memory (in MB). ! metricsWriter.write(""+Runtime.getRuntime().totalMemory()/Bytes.megabyte+", "); // error metricsWriter.write(""+(t.error == null?"Ok":t.error.getMessage())+", "); *************** *** 537,540 **** --- 543,547 ---- metricsWriter.write("bnodesInStore, "); metricsWriter.write("literalsInStore, "); + metricsWriter.write("totalMemory, "); metricsWriter.write("error, "); metricsWriter.write("filename\n"); *************** *** 1060,1066 **** } ! testMetrics.loadFiles(); ! ! testMetrics.tearDown(); } --- 1067,1079 ---- } ! try { ! ! testMetrics.loadFiles(); ! ! } finally { ! ! testMetrics.tearDown(); ! ! } } |
From: Bryan T. <tho...@us...> - 2007-03-29 17:01:50

Update of /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16595/src/test/com/bigdata/rdf

Modified Files:
    AbstractTripleStoreTestCase.java
Log Message:
Fixed bug in overflow handling for triple store. Added DataService UUID[] to
partition metadata.

Index: AbstractTripleStoreTestCase.java
===================================================================
RCS file: /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/AbstractTripleStoreTestCase.java,v
retrieving revision 1.13
retrieving revision 1.14
diff -C2 -d -r1.13 -r1.14
*** AbstractTripleStoreTestCase.java	22 Mar 2007 21:11:29 -0000	1.13
--- AbstractTripleStoreTestCase.java	29 Mar 2007 17:01:47 -0000	1.14
***************
*** 87,91 ****
  // properties.setProperty(Options.BUFFER_MODE, BufferMode.Transient.toString());

!     properties.setProperty(Options.BUFFER_MODE, getBufferMode().toString());

  // properties.setProperty(Options.BUFFER_MODE, BufferMode.Disk.toString());
--- 87,93 ----
  // properties.setProperty(Options.BUFFER_MODE, BufferMode.Transient.toString());

!     if(properties.getProperty(Options.BUFFER_MODE)==null) {
!         properties.setProperty(Options.BUFFER_MODE, getBufferMode().toString());
!     }

  // properties.setProperty(Options.BUFFER_MODE, BufferMode.Disk.toString());
***************
*** 163,169 ****
  }

  public void tearDown() {

!     if(store.isOpen()) store.closeAndDelete();

  }
--- 165,178 ----
  }

+ /**
+  * If the store is open, then closes and deletes the store.
+  */
  public void tearDown() {

!     if(store.isOpen()) {
!
!         store.closeAndDelete();
!
!     }

  }

From: Bryan T. <tho...@us...> - 2007-03-29 17:01:39
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16125/src/java/com/bigdata/scaleup Modified Files: PartitionedIndexView.java PartitionMetadata.java MasterJournal.java SegmentMetadata.java SlaveJournal.java AbstractPartitionTask.java IResourceMetadata.java IsolatablePartitionedIndexView.java JournalMetadata.java Log Message: Fixed bug in overflow handling for triple store. Added DataService UUID[] to partition metadata. Index: IsolatablePartitionedIndexView.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/IsolatablePartitionedIndexView.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** IsolatablePartitionedIndexView.java 12 Mar 2007 18:06:12 -0000 1.2 --- IsolatablePartitionedIndexView.java 29 Mar 2007 17:01:33 -0000 1.3 *************** *** 49,67 **** import com.bigdata.isolation.IIsolatableIndex; import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.objndx.IEntryIterator; /** * A {@link PartitionedIndexView} that supports transactions and deletion ! * markers. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ - * - * FIXME implement; support processing of delete markers. They can exist in the mutable - * btree and in index segments that are not either a clean first eviction or a - * full compacting merge (e.g., they can still exist in a compacting merge if - * there are other index segments or btrees that are part of a partition but are - * not partitipating in the compacting merge). */ public class IsolatablePartitionedIndexView extends PartitionedIndexView implements IIsolatableIndex { --- 49,93 ---- import com.bigdata.isolation.IIsolatableIndex; + import com.bigdata.isolation.IsolatableFusedView; import com.bigdata.isolation.UnisolatedBTree; + import com.bigdata.objndx.IBatchBTree; import com.bigdata.objndx.IEntryIterator; + import com.bigdata.objndx.ReadOnlyFusedView; /** * A {@link PartitionedIndexView} that supports transactions and deletion ! * markers. Write operations are passed through to the base class, which in turn ! * delegates them to the {@link UnisolatedBTree} identified to the constructor. ! * Read operations understand deletion markers. Processing deletion markers ! * requires that the source(s) for an index partition view are read in order ! * from the most recent (the mutable btree that is absorbing writes for the ! * index partition) to the earliest historical resource. The first entry for the ! * key in any source is the value that will be reported on a read. If the entry ! * is deleted, then the read will report that no entry exists for that key. ! * <p> ! * Note that deletion markers can exist in both the mutable btree absorbing ! * writes and in historical journals and index segments having data for the ! * partition view. Deletion markers are expunged from index segments only by a ! * full compacting merge of all index segments having life data for the ! * partition. ! * <p> ! * Implementation note: both the write operations and the {@link IBatchBTree} ! * operations are inherited from the base class. Only non-batch read operations ! * are overriden by this class. ! * ! * FIXME implement; support processing of delete markers - basically they have ! * to be processed on read so that a delete on the mutable btree overrides an ! * historical value, and a deletion marker in a more recent index segment ! 
* overrides a deletion marker in an earlier index segment. Deletion markers can ! * exist in both the mutable btree and in index segments that are not either a ! * clean first eviction or a full compacting merge (e.g., they can still exist ! * in a compacting merge if there are other index segments or btrees that are ! * part of a partition but are not participating in the compacting merge). ! * ! * @see IsolatableFusedView ! * @see ReadOnlyFusedView * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public class IsolatablePartitionedIndexView extends PartitionedIndexView implements IIsolatableIndex { *************** *** 69,126 **** /** * @param btree * @param mdi */ public IsolatablePartitionedIndexView(UnisolatedBTree btree, MetadataIndex mdi) { super(btree, mdi); - // throw new UnsupportedOperationException(); - } - - public boolean contains(byte[] key) { - // TODO Auto-generated method stub - return false; - } - - public Object insert(Object key, Object value) { - // TODO Auto-generated method stub - return null; - } - - public Object lookup(Object key) { - // TODO Auto-generated method stub - return null; - } - - public int rangeCount(byte[] fromKey, byte[] toKey) { - // TODO Auto-generated method stub - return 0; - } - - public IEntryIterator rangeIterator(byte[] fromKey, byte[] toKey) { - // TODO Auto-generated method stub - return null; - } - - public Object remove(Object key) { - // TODO Auto-generated method stub - return null; - } - - public void contains(int ntuples, byte[][] keys, boolean[] contains) { - // TODO Auto-generated method stub - - } - - public void insert(int ntuples, byte[][] keys, Object[] values) { - // TODO Auto-generated method stub - - } - - public void lookup(int ntuples, byte[][] keys, Object[] values) { - // TODO Auto-generated method stub - - } - - public void remove(int ntuples, byte[][] keys, Object[] values) { - // TODO Auto-generated method stub } --- 95,105 ---- /** * @param btree + * The btree that will absorb writes for the index partitions. * @param mdi + * The metadata index. */ public IsolatablePartitionedIndexView(UnisolatedBTree btree, MetadataIndex mdi) { + super(btree, mdi); } Index: JournalMetadata.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/JournalMetadata.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** JournalMetadata.java 27 Mar 2007 17:11:41 -0000 1.2 --- JournalMetadata.java 29 Mar 2007 17:01:33 -0000 1.3 *************** *** 44,48 **** package com.bigdata.scaleup; - import java.io.File; import java.util.UUID; --- 44,47 ---- *************** *** 61,85 **** protected final String filename; protected final ResourceState state; protected final UUID uuid; ! public File getFile() { ! return new File(filename); } /** ! * Always returns ZERO (0L) since we can not accurately estimate the #of ! * bytes on the journal dedicated to a given partition of a named index. */ ! public long size() { ! return 0L; } ! public ResourceState state() { return state; } ! public UUID getUUID() { return uuid; } --- 60,108 ---- protected final String filename; + protected final long nbytes; protected final ResourceState state; protected final UUID uuid; ! public final boolean isIndexSegment() { ! ! return false; ! ! } ! ! public final boolean isJournal() { ! ! return true; ! ! } ! ! public final String getFile() { ! ! return filename; ! } /** !
* Note: this value is typically zero (0L) since we can not accurately ! * estimate the #of bytes on the journal dedicated to a given partition of a ! * named index. The value is originally set to zero (0L) by the ! * {@link JournalMetadata#JournalMetadata(Journal, ResourceState)} ! * constructor. */ ! public final long size() { ! ! return nbytes; ! } ! public final ResourceState state() { ! return state; + } ! public final UUID getUUID() { ! return uuid; + } *************** *** 94,97 **** --- 117,126 ---- this.filename = journal.getFile().toString(); + /* + * Note: 0L since we can not easily estimate the #of bytes on the + * journal that are dedicated to an index partition. + */ + this.nbytes = 0L; + this.state = state; *************** *** 100,102 **** --- 129,149 ---- } + public JournalMetadata(String file, long nbytes, ResourceState state, UUID uuid) { + + if(file == null) throw new IllegalArgumentException(); + + if(state == null) throw new IllegalArgumentException(); + + if(uuid == null) throw new IllegalArgumentException(); + + this.filename = file; + + this.nbytes = nbytes; + + this.state = state; + + this.uuid = uuid; + + } + } Index: IResourceMetadata.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/IResourceMetadata.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** IResourceMetadata.java 27 Mar 2007 17:11:41 -0000 1.2 --- IResourceMetadata.java 29 Mar 2007 17:01:33 -0000 1.3 *************** *** 60,66 **** /** ! * The store file. */ ! public File getFile(); /** --- 60,80 ---- /** ! * True iff this resource is an {@link IndexSegment}. Each ! * {@link IndexSegment} contains historical read-only data for exactly one ! * partition of a scale-out index. */ ! public boolean isIndexSegment(); ! ! /** ! * True iff this resource is a {@link Journal}. When the resource is a ! * {@link Journal}, there will be a named mutable btree on the journal that ! * is absorbing writes for one or more index partition of a scale-out index. ! */ ! public boolean isJournal(); ! ! /** ! * The name of the file containing the resource. ! */ ! public String getFile(); /** Index: AbstractPartitionTask.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/AbstractPartitionTask.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** AbstractPartitionTask.java 27 Mar 2007 17:11:41 -0000 1.4 --- AbstractPartitionTask.java 29 Mar 2007 17:01:33 -0000 1.5 *************** *** 48,52 **** import java.util.concurrent.Executors; - import com.bigdata.isolation.IIsolatableIndex; import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.isolation.Value; --- 48,51 ---- *************** *** 56,60 **** import com.bigdata.objndx.IndexSegmentBuilder; import com.bigdata.objndx.IndexSegmentMerger; - import com.bigdata.objndx.IndexSegmentMetadata; import com.bigdata.objndx.RecordCompressor; import com.bigdata.objndx.IndexSegmentMerger.MergedEntryIterator; --- 55,58 ---- *************** *** 90,98 **** * @todo parameterize useChecksum, recordCompressor. * - * @todo assiging sequential segment identifiers may impose an unnecessary - * message overhead since we can just use the temporary file mechanism - * and the inspect the {@link IndexSegmentMetadata} to learn more - * about a given store file. - * * @todo try performance with and without checksums and with and without * record compression. 
--- 88,91 ---- *************** *** 129,138 **** */ protected final RecordCompressor recordCompressor = null; ! ! /** ! * The serializer used by all {@link IIsolatableIndex}s. ! */ ! static protected final IValueSerializer valSer = Value.Serializer.INSTANCE; ! /** * --- 122,126 ---- */ protected final RecordCompressor recordCompressor = null; ! /** * *************** *** 220,224 **** private final IResourceMetadata src; - private final int segId; /** --- 208,211 ---- *************** *** 227,236 **** * The source for the build operation. Only those entries in * the described key range will be used. - * @param segId - * The output segment identifier. */ public BuildTask(MasterJournal master, String name, UUID indexUUID, int branchingFactor, double errorRate, int partId, ! byte[] fromKey, byte[] toKey, IResourceMetadata src, int segId) { super(master, name, indexUUID, branchingFactor, errorRate, partId, --- 214,221 ---- * The source for the build operation. Only those entries in * the described key range will be used. */ public BuildTask(MasterJournal master, String name, UUID indexUUID, int branchingFactor, double errorRate, int partId, ! byte[] fromKey, byte[] toKey, IResourceMetadata src) { super(master, name, indexUUID, branchingFactor, errorRate, partId, *************** *** 239,244 **** this.src = src; - this.segId = segId; - } --- 224,227 ---- *************** *** 255,264 **** AbstractBTree src = master.getIndex(name,this.src); ! File outFile = master.getSegmentFile(name, partId, segId); IndexSegmentBuilder builder = new IndexSegmentBuilder(outFile, master.tmpDir, src.rangeCount(fromKey, toKey), src .rangeIterator(fromKey, toKey), branchingFactor, ! valSer, useChecksum, recordCompressor, errorRate, indexUUID); IResourceMetadata[] resources = new SegmentMetadata[] { new SegmentMetadata( --- 238,248 ---- AbstractBTree src = master.getIndex(name,this.src); ! File outFile = master.getSegmentFile(name, partId); IndexSegmentBuilder builder = new IndexSegmentBuilder(outFile, master.tmpDir, src.rangeCount(fromKey, toKey), src .rangeIterator(fromKey, toKey), branchingFactor, ! src.getNodeSerializer().getValueSerializer(), useChecksum, ! recordCompressor, errorRate, indexUUID); IResourceMetadata[] resources = new SegmentMetadata[] { new SegmentMetadata( *************** *** 282,286 **** abstract static class AbstractMergeTask extends AbstractPartitionTask { - protected final int segId; protected final boolean fullCompactingMerge; --- 266,269 ---- *************** *** 296,300 **** protected AbstractMergeTask(MasterJournal master, String name, UUID indexUUID, int branchingFactor, double errorRate, ! int partId, byte[] fromKey, byte[] toKey, int segId, boolean fullCompactingMerge) { --- 279,283 ---- protected AbstractMergeTask(MasterJournal master, String name, UUID indexUUID, int branchingFactor, double errorRate, ! int partId, byte[] fromKey, byte[] toKey, boolean fullCompactingMerge) { *************** *** 302,307 **** fromKey, toKey); - this.segId = segId; - this.fullCompactingMerge = fullCompactingMerge; --- 285,288 ---- *************** *** 326,330 **** // output file for the merged segment. ! File outFile = master.getSegmentFile(name, partId, segId); IResourceMetadata[] resources = getResources(); --- 307,311 ---- // output file for the merged segment. ! File outFile = master.getSegmentFile(name, partId); IResourceMetadata[] resources = getResources(); *************** *** 339,347 **** } // merge the data from the btree on the slave and the index // segment. 
MergedLeafIterator mergeItr = new IndexSegmentMerger( tmpFileBranchingFactor, srcs).merge(); ! // build the merged index segment. IndexSegmentBuilder builder = new IndexSegmentBuilder(outFile, --- 320,331 ---- } + final IValueSerializer valSer = srcs[0].getNodeSerializer() + .getValueSerializer(); + // merge the data from the btree on the slave and the index // segment. MergedLeafIterator mergeItr = new IndexSegmentMerger( tmpFileBranchingFactor, srcs).merge(); ! // build the merged index segment. IndexSegmentBuilder builder = new IndexSegmentBuilder(outFile, *************** *** 383,390 **** // new segment definitions. ! final SegmentMetadata[] newSegs = new SegmentMetadata[2]; // assume only the last segment is live. ! final SegmentMetadata oldSeg = pmd.segs[pmd.segs.length-1]; newSegs[0] = new SegmentMetadata(oldSeg.filename, oldSeg.nbytes, --- 367,374 ---- // new segment definitions. ! final IResourceMetadata[] newSegs = new IResourceMetadata[2]; // assume only the last segment is live. ! final SegmentMetadata oldSeg = (SegmentMetadata)pmd.resources[pmd.resources.length-1]; newSegs[0] = new SegmentMetadata(oldSeg.filename, oldSeg.nbytes, *************** *** 394,398 **** .length(), ResourceState.Live, builder.segmentUUID); ! mdi.put(fromKey, new PartitionMetadata(0, segId + 1, newSegs)); return null; --- 378,382 ---- .length(), ResourceState.Live, builder.segmentUUID); ! mdi.put(fromKey, new PartitionMetadata(0, pmd.dataServices, newSegs)); return null; *************** *** 436,444 **** public MergeTask(MasterJournal master, String name, UUID indexUUID, int branchingFactor, double errorRate, int partId, ! byte[] fromKey, byte[] toKey, IResourceMetadata[] resources, ! int segId) { super(master, name, indexUUID, branchingFactor, errorRate, partId, ! fromKey, toKey, segId, false); this.resources = resources; --- 420,428 ---- public MergeTask(MasterJournal master, String name, UUID indexUUID, int branchingFactor, double errorRate, int partId, ! byte[] fromKey, byte[] toKey, IResourceMetadata[] resources ! ) { super(master, name, indexUUID, branchingFactor, errorRate, partId, ! fromKey, toKey, false); this.resources = resources; *************** *** 490,497 **** public FullMergeTask(MasterJournal master, String name, UUID indexUUID, int branchingFactor, double errorRate, int partId, ! byte[] fromKey, byte[] toKey, long commitTime, int segId) { super(master, name, indexUUID, branchingFactor, errorRate, partId, ! fromKey, toKey, segId, true); this.commitTime = commitTime; --- 474,481 ---- public FullMergeTask(MasterJournal master, String name, UUID indexUUID, int branchingFactor, double errorRate, int partId, ! byte[] fromKey, byte[] toKey, long commitTime) { super(master, name, indexUUID, branchingFactor, errorRate, partId, ! 
fromKey, toKey, true); this.commitTime = commitTime; Index: SlaveJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/SlaveJournal.java,v retrieving revision 1.6 retrieving revision 1.7 diff -C2 -d -r1.6 -r1.7 *** SlaveJournal.java 27 Mar 2007 14:34:22 -0000 1.6 --- SlaveJournal.java 29 Mar 2007 17:01:33 -0000 1.7 *************** *** 50,57 **** import com.bigdata.isolation.IIsolatableIndex; import com.bigdata.isolation.UnisolatedBTree; - import com.bigdata.journal.IJournal; import com.bigdata.journal.Journal; import com.bigdata.journal.Name2Addr; - import com.bigdata.journal.ResourceManager; import com.bigdata.objndx.BTree; import com.bigdata.objndx.IEntryIterator; --- 50,55 ---- *************** *** 61,69 **** /** ! * Class delegates the {@link #overflow()} event to a master ! * {@link IJournal}. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public class SlaveJournal extends Journal { --- 59,69 ---- /** ! * Class delegates the {@link #overflow()} event to a {@link MasterJournal}. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ + * + * FIXME refactor the metadata index so that it may be run as an embedded + * process or remote service. */ public class SlaveJournal extends Journal { *************** *** 106,115 **** * The overflow event is delegated to the master. */ ! public void overflow() { // handles event reporting. super.overflow(); ! master.overflow(); } --- 106,115 ---- * The overflow event is delegated to the master. */ ! public boolean overflow() { // handles event reporting. super.overflow(); ! return master.overflow(); } *************** *** 233,247 **** /* ! * @todo the assigned random UUID for the metadata index must be used by ! * all B+Tree objects having data for the metadata index so once we ! * support partitions in the metadata index itself this UUID must be ! * propagated to all of those downstream objects. */ MetadataIndex mdi = new MetadataIndex(this, ! BTree.DEFAULT_BRANCHING_FACTOR, UUID.randomUUID(), btree ! .getIndexUUID(), name); ! // create the initial partition which can accept any key. ! mdi.put(new byte[]{}, new PartitionMetadata(0)); // add to the persistent name map. --- 233,264 ---- /* ! * Note: there are two UUIDs here - the UUID for the metadata index ! * describing the partitions of the named scale-out index and the UUID ! * of the named scale-out index. The metadata index UUID MUST be used by ! * all B+Tree objects having data for the metadata index (its mutable ! * btrees on journals and its index segments) while the managed named ! * index UUID MUST be used by all B+Tree objects having data for the ! * named index (its mutable btrees on journals and its index segments). */ + + final UUID metadataIndexUUID = UUID.randomUUID(); + + final UUID managedIndexUUID = btree.getIndexUUID(); + MetadataIndex mdi = new MetadataIndex(this, ! BTree.DEFAULT_BRANCHING_FACTOR, metadataIndexUUID, ! managedIndexUUID, name); ! /* ! * Create the initial partition which can accept any key. ! * ! * @todo specify the DataSerivce(s) that will accept writes for this ! * index partition. This should be done as part of refactoring the ! * metadata index into a first level service. ! */ ! ! final UUID[] dataServices = new UUID[]{}; ! ! mdi.put(new byte[]{}, new PartitionMetadata(0, dataServices )); // add to the persistent name map. *************** *** 363,376 **** final PartitionMetadata pmd = (PartitionMetadata) itr.next(); ! 
for (int i = 0; i < pmd.segs.length; i++) { ! SegmentMetadata smd = pmd.segs[i]; ! File file = new File(smd.filename); ! if (file.exists() && !file.delete()) { ! log.warn("Could not remove file: " ! + file.getAbsolutePath()); } --- 380,397 ---- final PartitionMetadata pmd = (PartitionMetadata) itr.next(); ! for (int i = 0; i < pmd.resources.length; i++) { ! IResourceMetadata rmd = pmd.resources[i]; ! if (rmd.isIndexSegment()) { ! File file = new File(rmd.getFile()); ! if (file.exists() && !file.delete()) { ! ! log.warn("Could not remove file: " ! + file.getAbsolutePath()); ! ! } } Index: SegmentMetadata.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/SegmentMetadata.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** SegmentMetadata.java 27 Mar 2007 17:11:41 -0000 1.4 --- SegmentMetadata.java 29 Mar 2007 17:01:33 -0000 1.5 *************** *** 44,48 **** package com.bigdata.scaleup; - import java.io.File; import java.util.UUID; --- 44,47 ---- *************** *** 54,57 **** --- 53,58 ---- * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ + * + * @todo make fields protected/private. */ public class SegmentMetadata implements IResourceMetadata { *************** *** 74,77 **** --- 75,90 ---- final public UUID uuid; + public final boolean isIndexSegment() { + + return true; + + } + + public final boolean isJournal() { + + return false; + + } + public SegmentMetadata(String filename,long nbytes,ResourceState state, UUID uuid ) { *************** *** 101,106 **** } ! public File getFile() { ! return new File(filename); } --- 114,119 ---- } ! public String getFile() { ! return filename; } Index: PartitionedIndexView.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/PartitionedIndexView.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** PartitionedIndexView.java 27 Mar 2007 14:34:22 -0000 1.2 --- PartitionedIndexView.java 29 Mar 2007 17:01:33 -0000 1.3 *************** *** 179,187 **** int n = 0; ! for(int i=0; i<pmd.segs.length; i++) { ! if(pmd.segs[i].state != ResourceState.Live) continue; ! segs[n++] = (IndexSegment) master.getIndex(getName(), pmd.segs[i]); } --- 179,187 ---- int n = 0; ! for(int i=0; i<pmd.resources.length; i++) { ! if(pmd.resources[i].state() != ResourceState.Live) continue; ! segs[n++] = (IndexSegment) master.getIndex(getName(), pmd.resources[i]); } *************** *** 312,318 **** for (int i = 0; i < resources.length; i++) { ! SegmentMetadata seg = pmd.segs[i]; ! if (seg.state != ResourceState.Live) continue; --- 312,318 ---- for (int i = 0; i < resources.length; i++) { ! IResourceMetadata seg = pmd.resources[i]; ! if (seg.state() != ResourceState.Live) continue; Index: PartitionMetadata.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/PartitionMetadata.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** PartitionMetadata.java 27 Mar 2007 17:11:41 -0000 1.4 --- PartitionMetadata.java 29 Mar 2007 17:01:33 -0000 1.5 *************** *** 50,53 **** --- 50,55 ---- import java.util.UUID; + import com.bigdata.isolation.UnisolatedBTree; + import com.bigdata.journal.Journal; import com.bigdata.objndx.IValueSerializer; import com.bigdata.objndx.IndexSegment; *************** *** 57,89 **** * partition. * ! 
* FIXME add ordered UUID[] of the data services on which the index partition ! * has been mapped. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ ! public class PartitionMetadata { /** * The unique partition identifier. */ ! final int partId; ! /** ! * The next unique within partition segment identifier to be assigned. */ ! final int nextSegId; ! /** ! * Zero or more files containing {@link IndexSegment}s holding live ! * data for this partition. The entries in the array reflect the ! * creation time of the index segments. The earliest segment is listed ! * first. The most recently created segment is listed last. */ ! final SegmentMetadata[] segs; ! public PartitionMetadata(int partId) { ! this(partId, 0, new SegmentMetadata[] {}); } --- 59,101 ---- * partition. * ! * @todo provide a persistent event log or just integrate the state changes over ! * the historical states of the partition description in the metadata ! * index? ! * ! * @todo aggregate resource load statistics. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ ! public class PartitionMetadata /*implements Externalizable*/ { /** * The unique partition identifier. */ ! final protected int partId; ! /** ! * The ordered list of data services on which data for this partition will ! * be written and from which data for this partition may be read. */ ! final protected UUID[] dataServices; ! /** ! * Zero or more files containing {@link Journal}s or {@link IndexSegment}s ! * holding live data for this partition. The entries in the array reflect ! * the creation time of the index segments. The earliest segment is listed ! * first. The most recently created segment is listed last. Only the ! * {@link ResourceState#Live} resources must be read in order to provide a ! * consistent view of the data for the index partition. ! * {@link ResourceState#Dead} resources will eventually be scheduled for ! * restart-safe deletion. ! * ! * @see ResourceState */ ! final protected IResourceMetadata[] resources; ! public PartitionMetadata(int partId, UUID[] dataServices ) { ! this(partId, dataServices, new IResourceMetadata[] {}); } *************** *** 94,112 **** * The unique partition identifier assigned by the * {@link MetadataIndex}. ! * @param segs ! * A description of each {@link IndexSegment} associated with ! * that partition. */ ! public PartitionMetadata(int partId, int nextSegId, SegmentMetadata[] segs) { ! this.partId = partId; ! this.nextSegId = nextSegId; ! this.segs = segs; } /** * The #of live index segments (those having data that must be included * to construct a fused view representing the current state of the --- 106,162 ---- * The unique partition identifier assigned by the * {@link MetadataIndex}. ! * @param dataServices ! * The ordered array of data service identifiers on which data ! * for this partition will be written and from which data for ! * this partition may be read. ! * @param resources ! * A description of each {@link Journal} or {@link IndexSegment} ! * resource associated with that partition. */ ! public PartitionMetadata(int partId, UUID[] dataServices, ! IResourceMetadata[] resources) { ! if (dataServices == null) ! throw new IllegalArgumentException(); ! if (resources == null) ! throw new IllegalArgumentException(); ! this.partId = partId; ! ! this.dataServices = dataServices; ! ! this.resources = resources; } /** + * The #of data services on which the data for this partition will be + * written and from which they may be read. 
+ * + * @return The replication count for the index partition. + */ + public int getDataServiceCount() { + + return dataServices.length; + + } + + /** + * The ordered list of data services on which the data for this partition + * will be written and from which the data for this partition may be read. + * The first data service is always the primary. Writes SHOULD be pipelined + * from the primary to the secondaries in the same order as they appear in + * this array. + * + * @return A copy of the array of data service identifiers. + */ + public UUID[] getDataServices() { + + return dataServices.clone(); + + } + + /** * The #of live index segments (those having data that must be included * to construct a fused view representing the current state of the *************** *** 119,125 **** int count = 0; ! for (int i = 0; i < segs.length; i++) { ! if (segs[i].state == ResourceState.Live) count++; --- 169,175 ---- int count = 0; ! for (int i = 0; i < resources.length; i++) { ! if (resources[i].state() == ResourceState.Live) count++; *************** *** 141,149 **** int k = 0; ! for (int i = 0; i < segs.length; i++) { ! if (segs[i].state == ResourceState.Live) { ! files[k++] = segs[i].filename; } --- 191,199 ---- int k = 0; ! for (int i = 0; i < resources.length; i++) { ! if (resources[i].state() == ResourceState.Live) { ! files[k++] = resources[i].getFile(); } *************** *** 166,175 **** return false; ! if (segs.length != o2.segs.length) return false; ! for (int i = 0; i < segs.length; i++) { ! if (!segs[i].equals(o2.segs[i])) return false; --- 216,235 ---- return false; ! if (dataServices.length != o2.dataServices.length) return false; ! if (resources.length != o2.resources.length) ! return false; ! for (int i = 0; i < dataServices.length; i++) { ! ! if (!dataServices[i].equals(o2.dataServices[i])) ! return false; ! ! } ! ! for (int i = 0; i < resources.length; i++) { ! ! if (!resources[i].equals(o2.resources[i])) return false; *************** *** 186,279 **** } - // /** - // * The metadata about an index segment life cycle as served by a - // * specific service instance on some host. - // * - // * @todo we need to track load information for the service and the host. - // * however that information probably does not need to be restart - // * safe so it is easily maintained within a rather small hashmap - // * indexed by the service address. - // * - // * @author <a href="mailto:tho...@us...">Bryan - // * Thompson</a> - // * @version $Id$ - // */ - // public static class IndexSegmentServiceMetadata { - // - // /** - // * The service that is handling this index segment. This service - // * typically handles many index segments and multiplexes them on a - // * single journal. - // * - // * @todo When a client looks up an index segment in the metadata index, - // * what we send them is the set of key-addr entries from the leaf - // * in which the index segment was found. If a request by the - // * client to that service discovers that the service no longer - // * handles a key range, that the service is dead, etc., then the - // * client will have to invalidate its cache entry and lookup the - // * current location of the index segment in the metadata index. - // */ - // public InetSocketAddress addr; - // - // } - // - // /** - // * An array of the services that are registered as handling this index - // * segment. One of these services is the master and accepts writes from - // * the client. The other services mirror the segment and provide - // * redundency for failover and load balancing. 
The order in which the - // * segments are listed in this array could reflect the master (at - // * position zero) and the write pipeline from the master to the - // * secondaries could be simply the order of the entries in the array. - // */ - // public IndexSegmentServiceMetadata services[]; - // - // /** - // * The time that the index segment was started on that service. - // */ - // public long startTime; - // - // /** - // * A log of events for the index segment. This could just be a linked - // * list of strings that get serialized as a single string. Each event is - // * then a semi-structured string, typically generated by a purpose - // * specific logging appender. - // */ - // public Vector<Event> eventLog; - // - // public PartitionMetadata(InetSocketAddress addr) { - // } - // - // /** - // * An event for an index segment. - // * - // * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - // * @version $Id$ - // */ - // public static class Event { - // - //// public long timestamp; - // - // public String msg; - // - // /** - // * Serialization for an event. - // * - // * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - // * @version $Id$ - // */ - // public static class Serializer /*implements ...*/{ - // - // } - // - // } - /** * Serialization for an index segment metadata entry. * ! * FIXME implement {@link Externalizable} and use explicit versioning. ! * ! * FIXME assumes that resources are {@link IndexSegment}s rather than ! * either index segments or journals. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> --- 246,255 ---- } /** * Serialization for an index segment metadata entry. * ! * FIXME convert to use {@link UnisolatedBTree} (so byte[] values that we ! * (de-)serialize one a one-by-one basis ourselves), implement ! * {@link Externalizable} and use explicit versioning and packed integers. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> *************** *** 289,292 **** --- 265,270 ---- public transient static final PartitionMetadata.Serializer INSTANCE = new Serializer(); + // private static final transient int VERSION0 = 0x0; + public Serializer() { } *************** *** 294,323 **** public void putValues(DataOutputStream os, Object[] values, int nvals) throws IOException { ! for (int i = 0; i < nvals; i++) { PartitionMetadata val = (PartitionMetadata) values[i]; ! final int nsegs = val.segs.length; ! os.writeInt(val.partId); ! os.writeInt(val.nextSegId); ! ! os.writeInt(nsegs); ! for (int j = 0; j < nsegs; j++) { ! SegmentMetadata segmentMetadata = val.segs[j]; ! os.writeUTF(segmentMetadata.filename); ! os.writeLong(segmentMetadata.nbytes); ! os.writeInt(segmentMetadata.state.valueOf()); ! os.writeLong(segmentMetadata.uuid.getMostSignificantBits()); ! os.writeLong(segmentMetadata.uuid.getLeastSignificantBits()); } --- 272,317 ---- public void putValues(DataOutputStream os, Object[] values, int nvals) throws IOException { ! for (int i = 0; i < nvals; i++) { PartitionMetadata val = (PartitionMetadata) values[i]; ! final int nservices = val.dataServices.length; ! ! final int nresources = val.resources.length; ! os.writeInt(val.partId); + + os.writeInt(nservices); ! os.writeInt(nresources); ! for( int j=0; j<nservices; j++) { ! ! final UUID serviceUUID = val.dataServices[j]; ! ! os.writeLong(serviceUUID.getMostSignificantBits()); ! ! os.writeLong(serviceUUID.getLeastSignificantBits()); ! ! } ! ! for (int j = 0; j < nresources; j++) { ! IResourceMetadata rmd = val.resources[j]; ! os.writeBoolean(rmd.isIndexSegment()); ! ! 
os.writeUTF(rmd.getFile()); ! os.writeLong(rmd.size()); ! os.writeInt(rmd.state().valueOf()); ! final UUID resourceUUID = rmd.getUUID(); ! os.writeLong(resourceUUID.getMostSignificantBits()); ! ! os.writeLong(resourceUUID.getLeastSignificantBits()); } *************** *** 334,360 **** final int partId = is.readInt(); ! final int nextSegId = is.readInt(); ! ! final int nsegs = is.readInt(); ! PartitionMetadata val = new PartitionMetadata(partId, ! nextSegId, new SegmentMetadata[nsegs]); ! for (int j = 0; j < nsegs; j++) { String filename = is.readUTF(); long nbytes = is.readLong(); ! ResourceState state = ResourceState ! .valueOf(is.readInt()); UUID uuid = new UUID(is.readLong()/*MSB*/,is.readLong()/*LSB*/); ! val.segs[j] = new SegmentMetadata(filename, nbytes, state, uuid); } ! values[i] = val; } --- 328,364 ---- final int partId = is.readInt(); ! final int nservices = is.readInt(); ! final int nresources = is.readInt(); ! ! final UUID[] services = new UUID[nservices]; ! ! final IResourceMetadata[] resources = new IResourceMetadata[nresources]; ! for (int j = 0; j < nservices; j++) { + services[j] = new UUID(is.readLong()/*MSB*/,is.readLong()/*LSB*/); + + } + + for (int j = 0; j < nresources; j++) { + + boolean isIndexSegment = is.readBoolean(); + String filename = is.readUTF(); long nbytes = is.readLong(); ! ResourceState state = ResourceState.valueOf(is.readInt()); UUID uuid = new UUID(is.readLong()/*MSB*/,is.readLong()/*LSB*/); ! resources[j] = (isIndexSegment ? new SegmentMetadata( ! filename, nbytes, state, uuid) ! : new JournalMetadata(filename, nbytes, state, uuid)); } ! values[i] = new PartitionMetadata(partId, services, resources); } Index: MasterJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/MasterJournal.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** MasterJournal.java 27 Mar 2007 17:11:41 -0000 1.5 --- MasterJournal.java 29 Mar 2007 17:01:33 -0000 1.6 *************** *** 371,375 **** * <code>segment.dir</code> - The property whose value is the name of * the top level directory beneath which new segment files will be ! * created. When not specified the default is a directory named "<i>segs" * in the directory named by the {@link #JOURNAL_DIR} property. */ --- 371,375 ---- * <code>segment.dir</code> - The property whose value is the name of * the top level directory beneath which new segment files will be ! * created. When not specified the default is a directory named "<i>resources" * in the directory named by the {@link #JOURNAL_DIR} property. */ *************** *** 444,448 **** val = properties.getProperty(Options.SEGMENT_DIR); ! segmentDir = val == null?new File(journalDir,"segs"):new File(val); if(segmentDir.exists() && !segmentDir.isDirectory()) { --- 444,448 ---- val = properties.getProperty(Options.SEGMENT_DIR); ! segmentDir = val == null?new File(journalDir,"resources"):new File(val); if(segmentDir.exists() && !segmentDir.isDirectory()) { *************** *** 523,527 **** /* ! * Create the initial slave slave. */ this.slave = createSlave(this,properties); --- 523,527 ---- /* ! * Create the initial slave journal. */ this.slave = createSlave(this,properties); *************** *** 631,636 **** * respects transaction isolation. */ ! public void overflow() { ! /* * Create the new buffer. --- 631,640 ---- * respects transaction isolation. */ ! public boolean overflow() { ! ! System.err.println("*** Overflow *** "); ! ! 
final Object state = willOverflow(); ! /* * Create the new buffer. *************** *** 675,681 **** --- 679,694 ---- // immediate shutdown of the old journal. oldJournal.closeAndDelete(); + + didOverflow(state); + + // handled overflow by opening a new journal. + return true; } + protected Object willOverflow() {return null;} + + protected void didOverflow(Object state) {} + /** * Does not queue indices for eviction, forcing the old journal to remain *************** *** 901,907 **** final PartitionMetadata pmd = mdi.get(separatorKey); - // the next segment identifier to be assigned. - final int segId = pmd.nextSegId; - if (pmd.getLiveCount()==0) { --- 914,917 ---- *************** *** 917,926 **** */ ! File outFile = getSegmentFile(name,pmd.partId,segId); IndexSegmentBuilder builder = new IndexSegmentBuilder(outFile, tmpDir, oldIndex.btree.getEntryCount(), oldIndex.btree ! .getRoot().entryIterator(), mseg, ! Value.Serializer.INSTANCE, true/* useChecksum */, null/* new RecordCompressor() */, 0d, oldIndex.btree .getIndexUUID()); --- 927,937 ---- */ ! File outFile = getSegmentFile(name,pmd.partId); IndexSegmentBuilder builder = new IndexSegmentBuilder(outFile, tmpDir, oldIndex.btree.getEntryCount(), oldIndex.btree ! .getRoot().entryIterator(), mseg, oldIndex ! .getBTree().getNodeSerializer() ! .getValueSerializer(), true/* useChecksum */, null/* new RecordCompressor() */, 0d, oldIndex.btree .getIndexUUID()); *************** *** 929,933 **** * update the metadata index for this partition. */ ! mdi.put(separatorKey, new PartitionMetadata(0, segId + 1, new SegmentMetadata[] { new SegmentMetadata("" + outFile, outFile.length(), ResourceState.Live, --- 940,944 ---- * update the metadata index for this partition. */ ! mdi.put(separatorKey, new PartitionMetadata(0, pmd.dataServices, new SegmentMetadata[] { new SegmentMetadata("" + outFile, outFile.length(), ResourceState.Live, *************** *** 955,959 **** // output file for the merged segment. ! File outFile = getSegmentFile(name, pmd.partId, segId); // merge the data from the btree on the slave and the index --- 966,970 ---- // output file for the merged segment. ! File outFile = getSegmentFile(name, pmd.partId); // merge the data from the btree on the slave and the index *************** *** 965,969 **** IndexSegmentBuilder builder = new IndexSegmentBuilder(outFile, null, mergeItr.nentries, new MergedEntryIterator(mergeItr), ! mseg, oldIndex.btree.getNodeSerializer() .getValueSerializer(), false/* useChecksum */, null/* recordCompressor */, 0d/* errorRate */, --- 976,980 ---- IndexSegmentBuilder builder = new IndexSegmentBuilder(outFile, null, mergeItr.nentries, new MergedEntryIterator(mergeItr), ! mseg, oldIndex.getBTree().getNodeSerializer() .getValueSerializer(), false/* useChecksum */, null/* recordCompressor */, 0d/* errorRate */, *************** *** 999,1003 **** // assume only the last segment is live. ! final SegmentMetadata oldSeg = pmd.segs[pmd.segs.length-1]; newSegs[0] = new SegmentMetadata(oldSeg.filename, oldSeg.nbytes, --- 1010,1014 ---- // assume only the last segment is live. ! final SegmentMetadata oldSeg = (SegmentMetadata)pmd.resources[pmd.resources.length-1]; newSegs[0] = new SegmentMetadata(oldSeg.filename, oldSeg.nbytes, *************** *** 1007,1011 **** .length(), ResourceState.Live, builder.segmentUUID); ! mdi.put(separatorKey, new PartitionMetadata(0, segId + 1, newSegs)); // /* --- 1018,1022 ---- .length(), ResourceState.Live, builder.segmentUUID); ! 
mdi.put(separatorKey, new PartitionMetadata(0, pmd.dataServices, newSegs)); // /* *************** *** 1026,1032 **** // assuming at most one dead and one live segment. ! if(pmd.segs.length>1) { ! final SegmentMetadata deadSeg = pmd.segs[0]; if(deadSeg.state!=ResourceState.Dead) { --- 1037,1043 ---- // assuming at most one dead and one live segment. ! if(pmd.resources.length>1) { ! final SegmentMetadata deadSeg = (SegmentMetadata)pmd.resources[0]; if(deadSeg.state!=ResourceState.Dead) { *************** *** 1068,1073 **** * The unique within index partition identifier - see * {@link PartitionMetadata#partId}. - * @param segId - * The unique within partition segment identifier. * * @todo munge the index name so that we can support unicode index names in --- 1079,1082 ---- *************** *** 1077,1081 **** * segmentId in the filenames. */ ! protected File getSegmentFile(String name,int partId,int segId) { File parent = getPartitionDirectory(name,partId); --- 1086,1090 ---- * segmentId in the filenames. */ ! protected File getSegmentFile(String name,int partId) { File parent = getPartitionDirectory(name,partId); *************** *** 1087,1093 **** } ! File file = new File(parent, segId + SEG); ! ! return file; } --- 1096,1108 ---- } ! try { ! ! return File.createTempFile(name, SEG, parent); ! ! } catch(IOException ex) { ! ! throw new RuntimeException(ex); ! ! } } *************** *** 1320,1324 **** // open the file. IndexSegmentFileStore fileStore = new IndexSegmentFileStore( ! resource.getFile()); // load the btree. --- 1335,1339 ---- // open the file. IndexSegmentFileStore fileStore = new IndexSegmentFileStore( ! new File(resource.getFile())); // load the btree. |
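The new record layout is easier to read in one piece than across the hunks above. A minimal consolidated sketch; the writeEntry() wrapper is hypothetical, but the field order is exactly what putValues() writes per partition entry:

    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.util.UUID;

    // Consolidated view of the per-entry layout written by putValues():
    // partId, #services, #resources, then each service UUID (MSB/LSB),
    // then one record per resource.
    static void writeEntry(DataOutputStream os, PartitionMetadata val)
            throws IOException {
        os.writeInt(val.partId);
        os.writeInt(val.dataServices.length);
        os.writeInt(val.resources.length);
        for (UUID u : val.dataServices) { // primary first, then secondaries
            os.writeLong(u.getMostSignificantBits());
            os.writeLong(u.getLeastSignificantBits());
        }
        for (IResourceMetadata rmd : val.resources) {
            os.writeBoolean(rmd.isIndexSegment()); // journal vs. index segment
            os.writeUTF(rmd.getFile());            // resource filename
            os.writeLong(rmd.size());
            os.writeInt(rmd.state().valueOf());
            final UUID u = rmd.getUUID();
            os.writeLong(u.getMostSignificantBits());
            os.writeLong(u.getLeastSignificantBits());
        }
    }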
From: Bryan T. <tho...@us...> - 2007-03-29 17:01:39
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/scaleup In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16125/src/test/com/bigdata/scaleup Modified Files: TestMetadataIndex.java TestPartitionedJournal.java Log Message: Fixed bug in overflow handling for triple store. Added DataService UUID[] to partition metadata. Index: TestMetadataIndex.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/scaleup/TestMetadataIndex.java,v retrieving revision 1.13 retrieving revision 1.14 diff -C2 -d -r1.13 -r1.14 *** TestMetadataIndex.java 27 Mar 2007 17:11:42 -0000 1.13 --- TestMetadataIndex.java 29 Mar 2007 17:01:34 -0000 1.14 *************** *** 159,164 **** final int partId0 = 0; ! ! PartitionMetadata part1 = new PartitionMetadata(partId0); assertEquals(null,md.put(key0, part1)); --- 159,166 ---- final int partId0 = 0; ! ! final UUID[] dataServices = new UUID[]{UUID.randomUUID(),UUID.randomUUID()}; ! ! PartitionMetadata part1 = new PartitionMetadata(partId0,dataServices); assertEquals(null,md.put(key0, part1)); *************** *** 169,173 **** final UUID segmentUUID_b = UUID.randomUUID(); ! PartitionMetadata part2 = new PartitionMetadata(partId0, 1, new SegmentMetadata[] { new SegmentMetadata("a", 10L, ResourceState.Live, segmentUUID_a) }); --- 171,175 ---- final UUID segmentUUID_b = UUID.randomUUID(); ! PartitionMetadata part2 = new PartitionMetadata(partId0, dataServices, new SegmentMetadata[] { new SegmentMetadata("a", 10L, ResourceState.Live, segmentUUID_a) }); *************** *** 177,181 **** assertEquals(part2,md.get(key0)); ! PartitionMetadata part3 = new PartitionMetadata(partId0, 2, new SegmentMetadata[] { new SegmentMetadata("a", 10L, ResourceState.Live, --- 179,183 ---- assertEquals(part2,md.get(key0)); ! PartitionMetadata part3 = new PartitionMetadata(partId0, dataServices, new SegmentMetadata[] { new SegmentMetadata("a", 10L, ResourceState.Live, *************** *** 251,255 **** */ final int partId0 = 0; ! PartitionMetadata part0 = new PartitionMetadata(partId0); assertEquals(null,md.put(key0, part0)); assertEquals(part0,md.get(key0)); --- 253,258 ---- */ final int partId0 = 0; ! final UUID[] dataServices = new UUID[]{UUID.randomUUID(),UUID.randomUUID()}; ! PartitionMetadata part0 = new PartitionMetadata(partId0,dataServices); assertEquals(null,md.put(key0, part0)); assertEquals(part0,md.get(key0)); *************** *** 259,263 **** final int partId1 = 1; ! PartitionMetadata part1 = new PartitionMetadata(partId1,1, new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live, segmentUUID_a) }); assertEquals(null,md.put(key1, part1)); --- 262,266 ---- final int partId1 = 1; ! PartitionMetadata part1 = new PartitionMetadata(partId1,dataServices, new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live, segmentUUID_a) }); assertEquals(null,md.put(key1, part1)); *************** *** 265,269 **** final int partId2 = 2; ! PartitionMetadata part2 = new PartitionMetadata(partId2,2, new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live, segmentUUID_a), new SegmentMetadata("b", 20L,ResourceState.Live, segmentUUID_b) }); --- 268,272 ---- final int partId2 = 2; ! PartitionMetadata part2 = new PartitionMetadata(partId2,dataServices, new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live, segmentUUID_a), new SegmentMetadata("b", 20L,ResourceState.Live, segmentUUID_b) }); *************** *** 347,351 **** */ final int partId0 = 0; ! 
PartitionMetadata part0 = new PartitionMetadata(partId0); assertEquals(null,md.put(key0, part0)); assertEquals(part0,md.get(key0)); --- 350,355 ---- */ final int partId0 = 0; ! final UUID[] dataServices = new UUID[]{UUID.randomUUID(),UUID.randomUUID()}; ! PartitionMetadata part0 = new PartitionMetadata(partId0,dataServices); assertEquals(null,md.put(key0, part0)); assertEquals(part0,md.get(key0)); *************** *** 355,359 **** final int partId1 = 1; ! PartitionMetadata part1 = new PartitionMetadata(partId1,1, new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live, segmentUUID_a) }); assertEquals(null,md.put(key1, part1)); --- 359,363 ---- final int partId1 = 1; ! PartitionMetadata part1 = new PartitionMetadata(partId1,dataServices, new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live, segmentUUID_a) }); assertEquals(null,md.put(key1, part1)); *************** *** 361,365 **** final int partId2 = 2; ! PartitionMetadata part2 = new PartitionMetadata(partId2,2, new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live, segmentUUID_a), new SegmentMetadata("b", 20L,ResourceState.Live, segmentUUID_b) }); --- 365,369 ---- final int partId2 = 2; ! PartitionMetadata part2 = new PartitionMetadata(partId2,dataServices, new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live, segmentUUID_a), new SegmentMetadata("b", 20L,ResourceState.Live, segmentUUID_b) }); *************** *** 429,432 **** --- 433,438 ---- Journal store = new Journal(properties); + final UUID[] dataServices = new UUID[]{UUID.randomUUID(),UUID.randomUUID()}; + final UUID indexUUID = UUID.randomUUID(); *************** *** 438,442 **** // define a single partition with no segments. ! md.put(new byte[]{}, new PartitionMetadata(0)); // btree to be filled with data. --- 444,448 ---- // define a single partition with no segments. ! md.put(new byte[]{}, new PartitionMetadata(0,dataServices)); // btree to be filled with data. *************** *** 483,487 **** * update the metadata index for this partition. */ ! md.put(new byte[] {}, new PartitionMetadata(0, 1, new SegmentMetadata[] { new SegmentMetadata("" + outFile, outFile.length(), ResourceState.Live, --- 489,493 ---- * update the metadata index for this partition. */ ! md.put(new byte[] {}, new PartitionMetadata(0, dataServices, new SegmentMetadata[] { new SegmentMetadata("" + outFile, outFile.length(), ResourceState.Live, *************** *** 537,540 **** --- 543,548 ---- Journal store = new Journal(properties); + final UUID[] dataServices = new UUID[]{UUID.randomUUID(),UUID.randomUUID()}; + final UUID indexUUID = UUID.randomUUID(); *************** *** 546,550 **** // define a single partition with no segments. ! md.put(new byte[]{}, new PartitionMetadata(0)); // btree to be filled with data. --- 554,558 ---- // define a single partition with no segments. ! md.put(new byte[]{}, new PartitionMetadata(0,dataServices)); // btree to be filled with data. *************** *** 593,597 **** * update the metadata index for this partition. */ ! md.put(new byte[] {}, new PartitionMetadata(0, 2, new SegmentMetadata[] { new SegmentMetadata("" + outFile01, outFile01.length(), ResourceState.Live, --- 601,605 ---- * update the metadata index for this partition. */ ! md.put(new byte[] {}, new PartitionMetadata(0, dataServices, new SegmentMetadata[] { new SegmentMetadata("" + outFile01, outFile01.length(), ResourceState.Live, *************** *** 650,654 **** * has been replaced by the merged result (index segment 02). */ ! 
md.put(new byte[] {}, new PartitionMetadata(0, 3, new SegmentMetadata[] { new SegmentMetadata("" + outFile01, outFile01.length(), --- 658,662 ---- * has been replaced by the merged result (index segment 02). */ ! md.put(new byte[] {}, new PartitionMetadata(0, dataServices, new SegmentMetadata[] { new SegmentMetadata("" + outFile01, outFile01.length(), *************** *** 719,722 **** --- 727,732 ---- Journal store = new Journal(properties); + final UUID[] dataServices = new UUID[]{UUID.randomUUID(),UUID.randomUUID()}; + final UUID indexUUID = UUID.randomUUID(); *************** *** 728,732 **** // define a single partition with no segments. ! md.put(new byte[]{}, new PartitionMetadata(0)); /* --- 738,742 ---- // define a single partition with no segments. ! md.put(new byte[]{}, new PartitionMetadata(0,dataServices)); /* *************** *** 869,873 **** * update the metadata index for this partition. */ ! md.put(new byte[] {}, new PartitionMetadata(0, 2, new SegmentMetadata[] { new SegmentMetadata("" + outFile01, outFile01.length(), --- 879,883 ---- * update the metadata index for this partition. */ ! md.put(new byte[] {}, new PartitionMetadata(0, dataServices, new SegmentMetadata[] { new SegmentMetadata("" + outFile01, outFile01.length(), *************** *** 934,938 **** */ PartitionMetadata oldpart = md.put(new byte[] {}, ! new PartitionMetadata(0, nextSegId++, new SegmentMetadata[] { new SegmentMetadata("" + outFile02, outFile02 --- 944,948 ---- */ PartitionMetadata oldpart = md.put(new byte[] {}, ! new PartitionMetadata(0, dataServices, new SegmentMetadata[] { new SegmentMetadata("" + outFile02, outFile02 *************** *** 958,962 **** seg.close(); ! new File(oldpart.segs[0].filename).delete(); // this is now the current index segment. --- 968,972 ---- seg.close(); ! new File(oldpart.resources[0].getFile()).delete(); // this is now the current index segment. *************** *** 985,989 **** seg.close(); ! new File(md.get(new byte[]{}).segs[0].filename).delete(); System.err.println("End of stress test: ntrial="+ntrials+", nops="+nops); --- 995,999 ---- seg.close(); ! new File(md.get(new byte[]{}).resources[0].getFile()).delete(); System.err.println("End of stress test: ntrial="+ntrials+", nops="+nops); *************** *** 1017,1020 **** --- 1027,1032 ---- Journal store = new Journal(properties); + final UUID[] dataServices = new UUID[]{UUID.randomUUID(),UUID.randomUUID()}; + final UUID indexUUID = UUID.randomUUID(); *************** *** 1026,1030 **** // define a single partition with no segments. ! md.put(new byte[]{}, new PartitionMetadata(0)); // btree to be filled with data. --- 1038,1042 ---- // define a single partition with no segments. ! md.put(new byte[]{}, new PartitionMetadata(0,dataServices)); // btree to be filled with data. *************** *** 1072,1076 **** * update the metadata index for this partition. */ ! md.put(new byte[] {}, new PartitionMetadata(0,1, new SegmentMetadata[] { new SegmentMetadata("" + outFile01, outFile01.length(),ResourceState.Live, builder1.segmentUUID) })); --- 1084,1088 ---- * update the metadata index for this partition. */ ! md.put(new byte[] {}, new PartitionMetadata(0,dataServices, new SegmentMetadata[] { new SegmentMetadata("" + outFile01, outFile01.length(),ResourceState.Live, builder1.segmentUUID) })); *************** *** 1116,1120 **** * update the metadata index for this partition. */ ! 
md.put(new byte[] {}, new PartitionMetadata(0, 1, new SegmentMetadata[] { new SegmentMetadata("" + outFile01, outFile01.length(), --- 1128,1132 ---- * update the metadata index for this partition. */ ! md.put(new byte[] {}, new PartitionMetadata(0, dataServices, new SegmentMetadata[] { new SegmentMetadata("" + outFile01, outFile01.length(), Index: TestPartitionedJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/scaleup/TestPartitionedJournal.java,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -d -r1.8 -r1.9 *** TestPartitionedJournal.java 27 Mar 2007 14:34:24 -0000 1.8 --- TestPartitionedJournal.java 29 Mar 2007 17:01:34 -0000 1.9 *************** *** 58,62 **** --- 58,64 ---- import com.bigdata.journal.Journal; import com.bigdata.objndx.AbstractBTreeTestCase; + import com.bigdata.objndx.BTree; import com.bigdata.objndx.BatchInsert; + import com.bigdata.objndx.ByteArrayValueSerializer; import com.bigdata.objndx.IIndex; import com.bigdata.objndx.KeyBuilder; *************** *** 127,134 **** /** ! * Test the ability to register and use named index, including whether the ! * named index is restart safe. */ ! public void test_registerAndUse() { Properties properties = getProperties(); --- 129,202 ---- /** ! * Test the ability to register and use a named index that does NOT support ! * transactional isolation, including whether the named index is restart ! * safe. */ ! public void test_registerAndUse_noIsolation() { ! ! Properties properties = getProperties(); ! ! properties.setProperty(Options.DELETE_ON_CLOSE, "false"); ! ! properties.setProperty(Options.BASENAME,getName()); ! ! MasterJournal journal = new MasterJournal(properties); ! ! final String name = "abc"; ! ! IIndex index = new BTree(journal, 3, UUID.randomUUID(), ByteArrayValueSerializer.INSTANCE); ! ! assertNull(journal.getIndex(name)); ! ! index = journal.registerIndex(name, index); ! ! assertTrue(journal.getIndex(name) instanceof PartitionedIndexView); ! ! assertEquals("name", name, ((PartitionedIndexView) journal.getIndex(name)) ! .getName()); ! ! MetadataIndex mdi = journal.getSlave().getMetadataIndex(name); ! ! assertEquals("mdi.entryCount", 1, mdi.getEntryCount()); ! ! final byte[] k0 = new byte[]{0}; ! final byte[] v0 = new byte[]{0}; ! ! index.insert( k0, v0); ! ! /* ! * commit and close the journal ! */ ! journal.commit(); ! ! journal.close(); ! ! if (journal.isStable()) { ! ! /* ! * re-open the journal and test restart safety. ! */ ! journal = new MasterJournal(properties); ! ! index = (PartitionedIndexView) journal.getIndex(name); ! ! assertNotNull("btree", index); ! assertEquals("entryCount", 1, ((PartitionedIndexView)index).getBTree().getEntryCount()); ! assertEquals(v0, (byte[])index.lookup(k0)); ! ! journal.dropIndex(name); ! ! journal.close(); ! ! } ! ! } ! ! /** ! * Test the ability to register and use a named index that supports ! * transactional isolation, including whether the named index is restart ! * safe. ! */ ! public void test_registerAndUse_isolation() { Properties properties = getProperties(); *************** *** 259,263 **** assertEquals("#partitions",1,mdi.getEntryCount()); ! assertEquals("#segments",0,mdi.get(new byte[]{}).segs.length); /* --- 327,331 ---- assertEquals("#partitions",1,mdi.getEntryCount()); ! assertEquals("#segments",0,mdi.get(new byte[]{}).resources.length); /* *************** *** 308,312 **** assertEquals("#partitions",1,mdi.getEntryCount()); ! 
assertEquals("#segments",1,mdi.get(new byte[]{}).segs.length); /* --- 376,380 ---- assertEquals("#partitions",1,mdi.getEntryCount()); ! assertEquals("#segments",1,mdi.get(new byte[]{}).resources.length); /* *************** *** 387,403 **** assertEquals("partId",0,pmd.partId); - assertEquals("nextSegId",trial+1,pmd.nextSegId); - assertEquals("#segments", 1, pmd.getLiveCount()); ! if(pmd.segs.length>1) { ! assertEquals("#segments",2,pmd.segs.length); assertEquals("state", ResourceState.Dead, ! pmd.segs[0].state); assertEquals("state", ResourceState.Live, ! pmd.segs[1].state); } --- 455,469 ---- assertEquals("partId",0,pmd.partId); assertEquals("#segments", 1, pmd.getLiveCount()); ! if(pmd.resources.length>1) { ! assertEquals("#segments",2,pmd.resources.length); assertEquals("state", ResourceState.Dead, ! pmd.resources[0].state()); assertEquals("state", ResourceState.Live, ! pmd.resources[1].state()); } |
From: Bryan T. <tho...@us...> - 2007-03-29 17:01:38
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/journal In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16125/src/java/com/bigdata/journal Modified Files: IJournal.java ResourceManager.java AbstractBufferStrategy.java RootBlockView.java AbstractJournal.java Log Message: Fixed bug in overflow handling for triple store. Added DataService UUID[] to partition metadata. Index: RootBlockView.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/RootBlockView.java,v retrieving revision 1.15 retrieving revision 1.16 diff -C2 -d -r1.15 -r1.16 *** RootBlockView.java 27 Mar 2007 14:34:23 -0000 1.15 --- RootBlockView.java 29 Mar 2007 17:01:33 -0000 1.16 *************** *** 392,396 **** sb.append(", commitRecordAddr="+Addr.toString(getCommitRecordAddr())); sb.append(", commitRecordIndexAddr="+Addr.toString(getCommitRecordIndexAddr())); ! sb.append(", segmentUUID="+getUUID()); sb.append("}"); --- 392,396 ---- sb.append(", commitRecordAddr="+Addr.toString(getCommitRecordAddr())); sb.append(", commitRecordIndexAddr="+Addr.toString(getCommitRecordIndexAddr())); ! sb.append(", uuid="+getUUID()); sb.append("}"); Index: ResourceManager.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/ResourceManager.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** ResourceManager.java 15 Mar 2007 16:11:12 -0000 1.1 --- ResourceManager.java 29 Mar 2007 17:01:33 -0000 1.2 *************** *** 218,222 **** * extension metadata record at this time. this means that we can not * aggregate events for index segments for a given named index at this ! * time. */ static public void openIndexSegment(String name, String filename, long nbytes) { --- 218,222 ---- * extension metadata record at this time. this means that we can not * aggregate events for index segments for a given named index at this ! * time (actually, we can aggregate them by the indexUUID). */ static public void openIndexSegment(String name, String filename, long nbytes) { Index: AbstractJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/AbstractJournal.java,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -d -r1.8 -r1.9 *** AbstractJournal.java 27 Mar 2007 17:11:41 -0000 1.8 --- AbstractJournal.java 29 Mar 2007 17:01:33 -0000 1.9 *************** *** 308,312 **** public AbstractJournal(Properties properties) { - // int segmentId; long initialExtent = Options.DEFAULT_INITIAL_EXTENT; long maximumExtent = Options.DEFAULT_MAXIMUM_EXTENT; --- 308,311 ---- *************** *** 337,345 **** val = properties.getProperty(Options.BUFFER_MODE); ! if (val == null) val = BufferMode.Direct.toString(); BufferMode bufferMode = BufferMode.parse(val); /* * "useDirectBuffers" --- 336,349 ---- val = properties.getProperty(Options.BUFFER_MODE); ! if (val == null) { ! val = BufferMode.Direct.toString(); + + } BufferMode bufferMode = BufferMode.parse(val); + System.err.println(Options.BUFFER_MODE+"="+bufferMode); + /* * "useDirectBuffers" *************** *** 410,414 **** throw new RuntimeException("The '" + Options.MAXIMUM_EXTENT ! + "' is less than the initial extent."); } --- 414,419 ---- throw new RuntimeException("The '" + Options.MAXIMUM_EXTENT ! + "' (" + maximumExtent ! 
+ ") is less than the initial extent ("+initialExtent+")."); } *************** *** 1154,1160 **** final int nextOffset = _bufferStrategy.getNextOffset(); ! if (nextOffset > .9 * maximumExtent) { ! overflow(); } --- 1159,1185 ---- final int nextOffset = _bufferStrategy.getNextOffset(); ! /* ! * Choose maximum of the target maximum extent and the current user ! * data extent so that we do not re-trigger overflow immediately if ! * the buffer has been extended beyond the target maximum extent. ! * Among other things this lets you run the buffer up to a ! * relatively large extent (if you use a disk-only mode since you ! * will run out of memory if you use a fully buffered mode). ! */ ! final long limit = Math.max(maximumExtent, _bufferStrategy ! .getUserExtent()); ! ! if (nextOffset > .9 * limit) { ! if( overflow() ) { ! ! /* ! * Someone handled the overflow event by opening a new ! * journal to absorb further writes. ! */ ! ResourceManager.overflowJournal(getFile() == null ? null ! : getFile().toString(), size()); ! ! } } *************** *** 1168,1179 **** /** * Note: This implementation does not handle overflow of the journal. The ! * journal capacity will simply be extended until the available resources ! * are exhausted. */ ! public void overflow() { ! ! ResourceManager.overflowJournal(getFile() == null ? null : getFile() ! .toString(), size()); } --- 1193,1206 ---- /** * Note: This implementation does not handle overflow of the journal. The ! * journal capacity will simply be extended by {@link #write(ByteBuffer)} ! * until the available resources are exhausted. ! * ! * @return This implementation returns <code>false</code> since it does ! * NOT open a new journal. */ ! public boolean overflow() { + return false; + } *************** *** 1624,1628 **** // report event. ! ResourceManager.openUnisolatedBTree(name); } --- 1651,1655 ---- // report event. ! ResourceManager.dropUnisolatedBTree(name); } Index: AbstractBufferStrategy.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/AbstractBufferStrategy.java,v retrieving revision 1.14 retrieving revision 1.15 diff -C2 -d -r1.14 -r1.15 *** AbstractBufferStrategy.java 27 Mar 2007 17:11:41 -0000 1.14 --- AbstractBufferStrategy.java 29 Mar 2007 17:01:33 -0000 1.15 *************** *** 131,135 **** * operation should be retried. */ ! public boolean overflow(int needed) { final long userExtent = getUserExtent(); --- 131,135 ---- * operation should be retried. */ ! final public boolean overflow(int needed) { final long userExtent = getUserExtent(); Index: IJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/IJournal.java,v retrieving revision 1.10 retrieving revision 1.11 diff -C2 -d -r1.10 -r1.11 *** IJournal.java 15 Mar 2007 16:11:12 -0000 1.10 --- IJournal.java 29 Mar 2007 17:01:32 -0000 1.11 *************** *** 83,88 **** * event is not handled then the journal will automatically extent itself * until it either runs out of address space (int32) or other resources. */ ! public void overflow(); /** --- 83,92 ---- * event is not handled then the journal will automatically extent itself * until it either runs out of address space (int32) or other resources. + * + * @return true iff the overflow event was handled (e.g., if a new journal + * was created to absorb subsequent writes). if a new journal is NOT + * opened then this method should return false. */ ! 
public boolean overflow(); /** |
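Taken together, these changes define a small contract around the new boolean return. The following self-contained model (not the real class hierarchy) shows how the 90% trigger in the write path and a subclass that opens a new journal interact:

    // Model of the revised overflow contract. The base behavior simply
    // lets the buffer keep growing; a subclass such as MasterJournal
    // opens a new journal and returns true.
    abstract class JournalOverflowModel {

        long maximumExtent; // target maximum extent (Options.MAXIMUM_EXTENT)
        long userExtent;    // current extent of the backing buffer
        long nextOffset;    // next write offset

        /** @return true iff a new journal was opened to absorb writes. */
        boolean overflow() {
            return false;
        }

        void afterWrite() {
            // Use the larger of the target maximum and the current user
            // extent so an already-extended buffer does not re-trigger
            // overflow immediately.
            final long limit = Math.max(maximumExtent, userExtent);
            if (nextOffset > .9 * limit && overflow()) {
                reportOverflow(); // e.g., ResourceManager.overflowJournal(...)
            }
        }

        abstract void reportOverflow();
    }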
From: Bryan T. <tho...@us...> - 2007-03-29 17:01:38
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16125/src/java/com/bigdata/isolation Modified Files: IsolatableFusedView.java Log Message: Fixed bug in overflow handling for triple store. Added DataService UUID[] to partition metadata. Index: IsolatableFusedView.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation/IsolatableFusedView.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** IsolatableFusedView.java 11 Mar 2007 11:42:44 -0000 1.4 --- IsolatableFusedView.java 29 Mar 2007 17:01:34 -0000 1.5 *************** *** 49,61 **** import com.bigdata.objndx.AbstractBTree; import com.bigdata.objndx.ReadOnlyFusedView; /** ! * An {@link IFusedView} that understands how to process delete markers. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ - * - * @todo refactor to isolate and override the merge rule. */ public class IsolatableFusedView extends ReadOnlyFusedView implements IIsolatableIndex { --- 49,87 ---- import com.bigdata.objndx.AbstractBTree; + import com.bigdata.objndx.IBatchBTree; + import com.bigdata.objndx.IEntryIterator; import com.bigdata.objndx.ReadOnlyFusedView; + import com.bigdata.scaleup.PartitionedIndexView; /** ! * A read-only {@link IFusedView} that supports transactions and deletion ! * markers. ! * <p> ! * Processing deletion markers requires that the source(s) for an index ! * partition view are read in order from the most recent to the earliest ! * historical resource. The first entry for the key in any source is the value ! * that will be reported on a read. If the entry is deleted, then the read will ! * report that no entry exists for that key. ! * <p> ! * Note that deletion markers can exist in both historical journals and index ! * segments having data for the view. Deletion markers are expunged from index ! * segments only by a full compacting merge of all index segments having live ! * data for the partition. ! * <p> ! * Implementation note: the {@link IBatchBTree} operations are inherited from ! * the base class. Only non-batch read operations are overridden by this class. ! * ! * FIXME implement; support processing of delete markers (including handling of ! * the merge rule) - basically they have to be processed on read so that a ! * delete on the mutable btree overrides an historical value, and a deletion ! * marker in a more recent index segment overrides a deletion marker in an ! * earlier index segment. Deletion markers can exist in both the mutable btree ! * and in index segments that are not either a clean first eviction or a full ! * compacting merge (e.g., they can still exist in a compacting merge if there ! * are other index segments or btrees that are part of a partition but are not ! * participating in the compacting merge).
* * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public class IsolatableFusedView extends ReadOnlyFusedView implements IIsolatableIndex { *************** *** 76,78 **** --- 102,128 ---- } + public boolean contains(byte[] key) { + + throw new UnsupportedOperationException(); + + } + + public Object lookup(Object key) { + + throw new UnsupportedOperationException(); + + } + + public int rangeCount(byte[] fromKey, byte[] toKey) { + + throw new UnsupportedOperationException(); + + } + + public IEntryIterator rangeIterator(byte[] fromKey, byte[] toKey) { + + throw new UnsupportedOperationException(); + + } + } |
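The merge rule described in the class comment can be stated in a few lines. A self-contained model using stand-in types (Map layers instead of the real btree and index segment sources; this is not the bigdata API):

    import java.util.List;
    import java.util.Map;

    public class DeletionMarkerModel {

        /** An entry in one source layer; deleted==true is a deletion marker. */
        public static class Entry {
            final byte[] value;
            final boolean deleted;
            Entry(byte[] value, boolean deleted) {
                this.value = value;
                this.deleted = deleted;
            }
        }

        /**
         * Read under the merge rule: the first layer containing an entry for
         * the key decides the outcome, and a deletion marker in that layer
         * hides any older value.
         *
         * @param sources layers ordered from most recent to earliest.
         */
        public static byte[] read(List<Map<String, Entry>> sources, String key) {
            for (Map<String, Entry> layer : sources) {
                final Entry e = layer.get(key);
                if (e == null) continue;           // no entry in this layer
                return e.deleted ? null : e.value; // first hit wins
            }
            return null; // no layer has an entry for the key
        }
    }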
From: Bryan T. <tho...@us...> - 2007-03-29 17:01:37
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/objndx In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16125/src/java/com/bigdata/objndx Modified Files: IndexSegmentMerger.java ReadOnlyFusedView.java Log Message: Fixed bug in overflow handling for triple store. Added DataService UUID[] to partition metadata. Index: ReadOnlyFusedView.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/objndx/ReadOnlyFusedView.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** ReadOnlyFusedView.java 27 Mar 2007 14:34:22 -0000 1.2 --- ReadOnlyFusedView.java 29 Mar 2007 17:01:31 -0000 1.3 *************** *** 55,59 **** * <p> * A fused view providing read-only operations on multiple B+-Trees mapping ! * variable length unsigned byte[] keys to arbitrary values. * </p> * --- 55,60 ---- * <p> * A fused view providing read-only operations on multiple B+-Trees mapping ! * variable length unsigned byte[] keys to arbitrary values. This class does NOT ! * handle version counters or deletion markers. * </p> * *************** *** 61,66 **** * @version $Id$ * ! * @todo support N sources for a {@link ReadOnlyFusedView} by chaining together multiple ! * {@link ReadOnlyFusedView} instances if not in a more efficient manner. */ public class ReadOnlyFusedView implements IIndex, IFusedView { --- 62,68 ---- * @version $Id$ * ! * @todo support N sources for a {@link ReadOnlyFusedView} by chaining together ! * multiple {@link ReadOnlyFusedView} instances if not in a more efficient ! * manner. */ public class ReadOnlyFusedView implements IIndex, IFusedView { Index: IndexSegmentMerger.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/objndx/IndexSegmentMerger.java,v retrieving revision 1.11 retrieving revision 1.12 diff -C2 -d -r1.11 -r1.12 *** IndexSegmentMerger.java 11 Mar 2007 11:41:44 -0000 1.11 --- IndexSegmentMerger.java 29 Mar 2007 17:01:31 -0000 1.12 *************** *** 184,187 **** --- 184,189 ---- public IndexSegmentMerger(int m, AbstractBTree[] srcs) throws IOException { + // @todo validate sources for the same indexUUID. + throw new UnsupportedOperationException(); |
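The new @todo in IndexSegmentMerger could be discharged with a guard along these lines; getIndexUUID() appears on the btree classes elsewhere in this codebase, while the helper itself is hypothetical:

    import java.util.UUID;

    // Refuse to merge sources that do not belong to the same scale-out index.
    static void assertSameIndexUUID(AbstractBTree[] srcs) {
        final UUID expected = srcs[0].getIndexUUID();
        for (int i = 1; i < srcs.length; i++) {
            if (!expected.equals(srcs[i].getIndexUUID())) {
                throw new IllegalArgumentException(
                        "srcs[" + i + "] has a different indexUUID");
            }
        }
    }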
From: Bryan T. <tho...@us...> - 2007-03-27 17:12:04
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/journal In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv2900/src/test/com/bigdata/journal Modified Files: TestDiskJournal.java TestTransientJournal.java TestMappedJournal.java TestDirectJournal.java Log Message: Corrected problem in the interpretation of maximumExtent for an IBufferStrategy vs an IJournal. Working through use of isolatable indices for the triple store. Index: TestDirectJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestDirectJournal.java,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -d -r1.8 -r1.9 *** TestDirectJournal.java 22 Feb 2007 16:59:34 -0000 1.8 --- TestDirectJournal.java 27 Mar 2007 17:11:40 -0000 1.9 *************** *** 143,147 **** assertEquals(Options.INITIAL_EXTENT, Options.DEFAULT_INITIAL_EXTENT, bufferStrategy.getInitialExtent()); ! assertEquals(Options.MAXIMUM_EXTENT, Options.DEFAULT_MAXIMUM_EXTENT, bufferStrategy.getMaximumExtent()); assertNotNull("raf", bufferStrategy.raf); --- 143,147 ---- assertEquals(Options.INITIAL_EXTENT, Options.DEFAULT_INITIAL_EXTENT, bufferStrategy.getInitialExtent()); ! assertEquals(Options.MAXIMUM_EXTENT, 0L /*soft limit for Direct buffer*/, bufferStrategy.getMaximumExtent()); assertNotNull("raf", bufferStrategy.raf); Index: TestTransientJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestTransientJournal.java,v retrieving revision 1.7 retrieving revision 1.8 diff -C2 -d -r1.7 -r1.8 *** TestTransientJournal.java 21 Feb 2007 20:17:20 -0000 1.7 --- TestTransientJournal.java 27 Mar 2007 17:11:40 -0000 1.8 *************** *** 139,143 **** assertEquals(Options.INITIAL_EXTENT, Options.DEFAULT_INITIAL_EXTENT, bufferStrategy.getExtent()); ! assertEquals(Options.MAXIMUM_EXTENT, Options.DEFAULT_MAXIMUM_EXTENT, bufferStrategy.getMaximumExtent()); assertEquals(Options.BUFFER_MODE, BufferMode.Transient, bufferStrategy --- 139,143 ---- assertEquals(Options.INITIAL_EXTENT, Options.DEFAULT_INITIAL_EXTENT, bufferStrategy.getExtent()); ! assertEquals(Options.MAXIMUM_EXTENT, 0L/*soft limit for transient mode*/, bufferStrategy.getMaximumExtent()); assertEquals(Options.BUFFER_MODE, BufferMode.Transient, bufferStrategy Index: TestDiskJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestDiskJournal.java,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -d -r1.8 -r1.9 *** TestDiskJournal.java 22 Feb 2007 16:59:34 -0000 1.8 --- TestDiskJournal.java 27 Mar 2007 17:11:40 -0000 1.9 *************** *** 142,146 **** assertEquals(Options.INITIAL_EXTENT, Options.DEFAULT_INITIAL_EXTENT, bufferStrategy.getInitialExtent()); ! assertEquals(Options.MAXIMUM_EXTENT, Options.DEFAULT_MAXIMUM_EXTENT, bufferStrategy.getMaximumExtent()); assertNotNull("raf", bufferStrategy.raf); --- 142,146 ---- assertEquals(Options.INITIAL_EXTENT, Options.DEFAULT_INITIAL_EXTENT, bufferStrategy.getInitialExtent()); ! 
assertEquals(Options.MAXIMUM_EXTENT, 0L/*soft limit for disk mode*/, bufferStrategy.getMaximumExtent()); assertNotNull("raf", bufferStrategy.raf); Index: TestMappedJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestMappedJournal.java,v retrieving revision 1.9 retrieving revision 1.10 diff -C2 -d -r1.9 -r1.10 *** TestMappedJournal.java 22 Feb 2007 16:59:34 -0000 1.9 --- TestMappedJournal.java 27 Mar 2007 17:11:40 -0000 1.10 *************** *** 143,147 **** assertEquals(Options.INITIAL_EXTENT, Options.DEFAULT_INITIAL_EXTENT, bufferStrategy.getInitialExtent()); ! assertEquals(Options.MAXIMUM_EXTENT, Options.DEFAULT_MAXIMUM_EXTENT, bufferStrategy.getMaximumExtent()); assertNotNull("raf", bufferStrategy.raf); --- 143,149 ---- assertEquals(Options.INITIAL_EXTENT, Options.DEFAULT_INITIAL_EXTENT, bufferStrategy.getInitialExtent()); ! assertEquals( ! Options.MAXIMUM_EXTENT, ! Options.DEFAULT_MAXIMUM_EXTENT /* hard limit for mapped mode. */, bufferStrategy.getMaximumExtent()); assertNotNull("raf", bufferStrategy.raf); |
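The common thread across these four test updates, gathered in one place (test-style sketch; the *Strategy locals stand for buffer strategies opened in each of the four modes):

    // Only the mapped mode retains a hard maximum extent, because a mapped
    // file cannot be resized; the other modes report 0L and rely on the
    // journal's soft limit instead.
    assertEquals(Options.MAXIMUM_EXTENT, 0L, transientStrategy.getMaximumExtent());
    assertEquals(Options.MAXIMUM_EXTENT, 0L, directStrategy.getMaximumExtent());
    assertEquals(Options.MAXIMUM_EXTENT, 0L, diskStrategy.getMaximumExtent());
    assertEquals(Options.MAXIMUM_EXTENT, Options.DEFAULT_MAXIMUM_EXTENT,
            mappedStrategy.getMaximumExtent());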
From: Bryan T. <tho...@us...> - 2007-03-27 17:12:04
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/journal In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv2900/src/java/com/bigdata/journal Modified Files: IBufferStrategy.java AbstractBufferStrategy.java ForceEnum.java AbstractJournal.java Log Message: Corrected problem in the interpretation of maximumExtent for an IBufferStrategy vs an IJournal. Working through use of isolatable indices for the triple store. Index: AbstractBufferStrategy.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/AbstractBufferStrategy.java,v retrieving revision 1.13 retrieving revision 1.14 diff -C2 -d -r1.13 -r1.14 *** AbstractBufferStrategy.java 15 Mar 2007 16:11:12 -0000 1.13 --- AbstractBufferStrategy.java 27 Mar 2007 17:11:41 -0000 1.14 *************** *** 7,14 **** import java.text.NumberFormat; import com.bigdata.rawstore.Addr; import com.bigdata.rawstore.Bytes; - /** * Abstract base class for {@link IBufferStrategy} implementations. --- 7,16 ---- import java.text.NumberFormat; + import org.apache.log4j.Logger; + + import com.bigdata.objndx.AbstractBTree; import com.bigdata.rawstore.Addr; import com.bigdata.rawstore.Bytes; /** * Abstract base class for {@link IBufferStrategy} implementations. *************** *** 19,22 **** --- 21,29 ---- public abstract class AbstractBufferStrategy implements IBufferStrategy { + /** + * Logger for buffer strategy operations. + */ + protected static final Logger log = Logger.getLogger(AbstractBufferStrategy.class); + protected final long initialExtent; protected final long maximumExtent; *************** *** 81,84 **** --- 88,95 ---- * (Re-)open a buffer. * + * @param initialExtent - + * as defined by {@link #getInitialExtent()} + * @param maximumExtent - + * as defined by {@link #getMaximumExtent()}. * @param nextOffset * The next offset within the buffer on which a record will be *************** *** 97,101 **** this.initialExtent = initialExtent; ! this.maximumExtent = maximumExtent; this.nextOffset = nextOffset; --- 108,112 ---- this.initialExtent = initialExtent; ! this.maximumExtent = maximumExtent; // MAY be zero! this.nextOffset = nextOffset; *************** *** 129,132 **** --- 140,145 ---- // Would overflow int32 bytes. + + log.error("Would overflow int32 bytes."); return false; *************** *** 134,141 **** } ! if( required > maximumExtent ) { ! ! // Would exceed the maximum extent. return false; --- 147,156 ---- } ! if( maximumExtent != 0L && required > maximumExtent ) { + // Would exceed the maximum extent (iff a hard limit). + + log.error("Would exceed maximumExtent="+maximumExtent); + return false; Index: IBufferStrategy.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/IBufferStrategy.java,v retrieving revision 1.16 retrieving revision 1.17 diff -C2 -d -r1.16 -r1.17 *** IBufferStrategy.java 11 Mar 2007 11:42:45 -0000 1.16 --- IBufferStrategy.java 27 Mar 2007 17:11:41 -0000 1.17 *************** *** 36,41 **** --- 36,63 ---- public BufferMode getBufferMode(); + /** + * The initial extent. + */ public long getInitialExtent(); + /** + * The maximum extent allowable before a buffer overflow operation will be + * rejected. + * <p> + * Note: The semantics here differ from those defined by + * {@link Options#MAXIMUM_EXTENT}. The latter specifies the threshold at + * which a journal will overflow (onto another journal) while this specifies + * the maximum size to which a buffer is allowed to grow.
+ * <p> + * Note: This is <em>normally</em> zero (0L), which basically means that + * the maximum extent is ignored by the {@link IBufferStrategy} but + * respected by the {@link AbstractJournal}, resulting in a <i>soft limit</i> + * on journal overflow. The primary reason to limit the buffer size is when + * an in-memory buffer will be converted to a disk-based buffer -- see + * {@link TemporaryRawStore} for an example. + * + * @return The maximum extent permitted for the buffer -or- <code>0L</code> + * iff no limit is imposed. + */ public long getMaximumExtent(); Index: AbstractJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/AbstractJournal.java,v retrieving revision 1.7 retrieving revision 1.8 diff -C2 -d -r1.7 -r1.8 *** AbstractJournal.java 27 Mar 2007 14:34:23 -0000 1.7 --- AbstractJournal.java 27 Mar 2007 17:11:41 -0000 1.8 *************** *** 608,612 **** _bufferStrategy = new TransientBufferStrategy(initialExtent, ! maximumExtent, useDirectBuffers); /* --- 608,612 ---- _bufferStrategy = new TransientBufferStrategy(initialExtent, ! 0L/* soft limit for maximumExtent */, useDirectBuffers); /* *************** *** 649,654 **** readOnly, forceWrites); ! _bufferStrategy = new DirectBufferStrategy(maximumExtent, ! fileMetadata); this._rootBlock = fileMetadata.rootBlock; --- 649,654 ---- readOnly, forceWrites); ! _bufferStrategy = new DirectBufferStrategy( ! 0L/* soft limit for maximumExtent */, fileMetadata); this._rootBlock = fileMetadata.rootBlock; *************** *** 669,673 **** readOnly, forceWrites); ! _bufferStrategy = new MappedBufferStrategy(maximumExtent, fileMetadata); --- 669,678 ---- readOnly, forceWrites); ! /* ! * Note: the maximumExtent is a hard limit in this case only since ! * resize is not supported for mapped files. ! */ ! _bufferStrategy = new MappedBufferStrategy( ! maximumExtent /* hard limit for maximum extent */, fileMetadata); *************** *** 689,693 **** readOnly, forceWrites); ! _bufferStrategy = new DiskOnlyStrategy(maximumExtent, fileMetadata); this._rootBlock = fileMetadata.rootBlock; --- 694,699 ---- readOnly, forceWrites); ! _bufferStrategy = new DiskOnlyStrategy( ! 0L/* soft limit for maximumExtent */, fileMetadata); this._rootBlock = fileMetadata.rootBlock; Index: ForceEnum.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/ForceEnum.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** ForceEnum.java 21 Feb 2007 20:17:21 -0000 1.2 --- ForceEnum.java 27 Mar 2007 17:11:41 -0000 1.3 *************** *** 106,110 **** if( s.equals(Force.name)) return Force; if( s.equals(ForceMetadata.name)) return ForceMetadata; ! throw new IllegalArgumentException(); } --- 106,110 ---- if( s.equals(Force.name)) return Force; if( s.equals(ForceMetadata.name)) return ForceMetadata; ! throw new IllegalArgumentException("Unknown value: "+s); } |
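The zero-means-soft-limit convention distills to a single check. A hypothetical helper restating the condition added to AbstractBufferStrategy.overflow(int) above:

    // 0L disables the buffer-level cap (soft limit: the journal decides
    // when to overflow); any non-zero value is a hard cap on buffer growth.
    static boolean withinMaximumExtent(long required, long maximumExtent) {
        return maximumExtent == 0L || required <= maximumExtent;
    }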
From: Bryan T. <tho...@us...> - 2007-03-27 17:12:03
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/scaleup In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv2900/src/test/com/bigdata/scaleup Modified Files: TestMetadataIndex.java Log Message: Corrected problem in the interpretation of maximumExtent for an IBufferStrategy vs an IJournal. Working through use of isolatable indices for the triple store. Index: TestMetadataIndex.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/scaleup/TestMetadataIndex.java,v retrieving revision 1.12 retrieving revision 1.13 diff -C2 -d -r1.12 -r1.13 *** TestMetadataIndex.java 27 Mar 2007 14:34:24 -0000 1.12 --- TestMetadataIndex.java 27 Mar 2007 17:11:42 -0000 1.13 *************** *** 166,171 **** assertEquals(part1,md.get(key0)); ! PartitionMetadata part2 = new PartitionMetadata(partId0,1, ! new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live) }); assertEquals(part1,md.put(key0, part2)); --- 166,175 ---- assertEquals(part1,md.get(key0)); ! final UUID segmentUUID_a = UUID.randomUUID(); ! final UUID segmentUUID_b = UUID.randomUUID(); ! ! PartitionMetadata part2 = new PartitionMetadata(partId0, 1, ! new SegmentMetadata[] { new SegmentMetadata("a", 10L, ! ResourceState.Live, segmentUUID_a) }); assertEquals(part1,md.put(key0, part2)); *************** *** 173,179 **** assertEquals(part2,md.get(key0)); ! PartitionMetadata part3 = new PartitionMetadata(partId0,2, ! new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live), ! new SegmentMetadata("b", 20L,ResourceState.Live) }); assertEquals(part2,md.put(key0, part3)); --- 177,186 ---- assertEquals(part2,md.get(key0)); ! PartitionMetadata part3 = new PartitionMetadata(partId0, 2, ! new SegmentMetadata[] { ! new SegmentMetadata("a", 10L, ResourceState.Live, ! segmentUUID_a), ! new SegmentMetadata("b", 20L, ResourceState.Live, ! segmentUUID_b) }); assertEquals(part2,md.put(key0, part3)); *************** *** 248,254 **** assertEquals(part0,md.get(key0)); final int partId1 = 1; PartitionMetadata part1 = new PartitionMetadata(partId1,1, ! new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live) }); assertEquals(null,md.put(key1, part1)); assertEquals(part1,md.get(key1)); --- 255,264 ---- assertEquals(part0,md.get(key0)); + final UUID segmentUUID_a = UUID.randomUUID(); + final UUID segmentUUID_b = UUID.randomUUID(); + final int partId1 = 1; PartitionMetadata part1 = new PartitionMetadata(partId1,1, ! new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live, segmentUUID_a) }); assertEquals(null,md.put(key1, part1)); assertEquals(part1,md.get(key1)); *************** *** 256,261 **** final int partId2 = 2; PartitionMetadata part2 = new PartitionMetadata(partId2,2, ! new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live), ! new SegmentMetadata("b", 20L,ResourceState.Live) }); assertEquals(null, md.put(key2, part2)); assertEquals(part2, md.get(key2)); --- 266,271 ---- final int partId2 = 2; PartitionMetadata part2 = new PartitionMetadata(partId2,2, ! new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live, segmentUUID_a), ! new SegmentMetadata("b", 20L,ResourceState.Live, segmentUUID_b) }); assertEquals(null, md.put(key2, part2)); assertEquals(part2, md.get(key2)); *************** *** 340,347 **** assertEquals(null,md.put(key0, part0)); assertEquals(part0,md.get(key0)); final int partId1 = 1; PartitionMetadata part1 = new PartitionMetadata(partId1,1, ! 
new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live) }); assertEquals(null,md.put(key1, part1)); assertEquals(part1,md.get(key1)); --- 350,360 ---- assertEquals(null,md.put(key0, part0)); assertEquals(part0,md.get(key0)); + + final UUID segmentUUID_a = UUID.randomUUID(); + final UUID segmentUUID_b = UUID.randomUUID(); final int partId1 = 1; PartitionMetadata part1 = new PartitionMetadata(partId1,1, ! new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live, segmentUUID_a) }); assertEquals(null,md.put(key1, part1)); assertEquals(part1,md.get(key1)); *************** *** 349,354 **** final int partId2 = 2; PartitionMetadata part2 = new PartitionMetadata(partId2,2, ! new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live), ! new SegmentMetadata("b", 20L,ResourceState.Live) }); assertEquals(null, md.put(key2, part2)); assertEquals(part2,md.get(key2)); --- 362,367 ---- final int partId2 = 2; PartitionMetadata part2 = new PartitionMetadata(partId2,2, ! new SegmentMetadata[] { new SegmentMetadata("a", 10L,ResourceState.Live, segmentUUID_a), ! new SegmentMetadata("b", 20L,ResourceState.Live, segmentUUID_b) }); assertEquals(null, md.put(key2, part2)); assertEquals(part2,md.get(key2)); *************** *** 465,476 **** assertTrue(!outFile.exists() || outFile.delete()); ! new IndexSegmentBuilder(outFile,null,btree,100,0d); /* * update the metadata index for this partition. */ ! md.put(new byte[] {}, new PartitionMetadata(0,1, new SegmentMetadata[] { new SegmentMetadata("" + outFile, ! outFile.length(),ResourceState.Live) })); /* --- 478,490 ---- assertTrue(!outFile.exists() || outFile.delete()); ! IndexSegmentBuilder builder = new IndexSegmentBuilder(outFile,null,btree,100,0d); /* * update the metadata index for this partition. */ ! md.put(new byte[] {}, new PartitionMetadata(0, 1, new SegmentMetadata[] { new SegmentMetadata("" + outFile, ! outFile.length(), ResourceState.Live, ! builder.segmentUUID) })); /* *************** *** 574,585 **** assertTrue(!outFile01.exists() || outFile01.delete()); ! new IndexSegmentBuilder(outFile01,null,btree,100,0d); /* * update the metadata index for this partition. */ ! md.put(new byte[] {}, new PartitionMetadata(0,2, new SegmentMetadata[] { new SegmentMetadata("" + outFile01, ! outFile01.length(),ResourceState.Live) })); /* --- 588,600 ---- assertTrue(!outFile01.exists() || outFile01.delete()); ! IndexSegmentBuilder builder1 = new IndexSegmentBuilder(outFile01,null,btree,100,0d); /* * update the metadata index for this partition. */ ! md.put(new byte[] {}, new PartitionMetadata(0, 2, new SegmentMetadata[] { new SegmentMetadata("" + outFile01, ! outFile01.length(), ResourceState.Live, ! builder1.segmentUUID) })); /* *************** *** 623,631 **** .merge(); ! new IndexSegmentBuilder(outFile02, null, mergeItr.nentries, ! new MergedEntryIterator(mergeItr), 100, btree ! .getNodeSerializer().getValueSerializer(), ! false/* useChecksum */, null/* recordCompressor */, 0d/* errorRate */, ! btree.getIndexUUID()); /* --- 638,646 ---- .merge(); ! IndexSegmentBuilder builder2 = new IndexSegmentBuilder(outFile02, null, ! mergeItr.nentries, new MergedEntryIterator(mergeItr), 100, ! btree.getNodeSerializer().getValueSerializer(), ! false/* useChecksum */, null/* recordCompressor */, ! 0d/* errorRate */, btree.getIndexUUID()); /* *************** *** 635,641 **** * has been replaced by the merged result (index segment 02). */ ! md.put(new byte[] {}, new PartitionMetadata(0, 3, new SegmentMetadata[] { ! 
new SegmentMetadata("" + outFile01, outFile01.length(),ResourceState.Dead), ! new SegmentMetadata("" + outFile02, outFile02.length(),ResourceState.Live) })); /* --- 650,659 ---- * has been replaced by the merged result (index segment 02). */ ! md.put(new byte[] {}, new PartitionMetadata(0, 3, ! new SegmentMetadata[] { ! new SegmentMetadata("" + outFile01, outFile01.length(), ! ResourceState.Dead, builder1.segmentUUID), ! new SegmentMetadata("" + outFile02, outFile02.length(), ! ResourceState.Live, builder2.segmentUUID) })); /* *************** *** 846,859 **** assertTrue(!outFile01.exists() || outFile01.delete()); ! new IndexSegmentBuilder(outFile01, null, testData, mseg, 0d); /* * update the metadata index for this partition. */ ! md.put(new byte[] {}, ! new PartitionMetadata(0,2, ! new SegmentMetadata[] { new SegmentMetadata("" ! + outFile01, outFile01.length(), ! ResourceState.Live) })); /* --- 864,876 ---- assertTrue(!outFile01.exists() || outFile01.delete()); ! IndexSegmentBuilder builder = new IndexSegmentBuilder(outFile01, null, testData, mseg, 0d); /* * update the metadata index for this partition. */ ! md.put(new byte[] {}, new PartitionMetadata(0, 2, ! new SegmentMetadata[] { new SegmentMetadata("" ! + outFile01, outFile01.length(), ! ResourceState.Live, builder.segmentUUID) })); /* *************** *** 920,924 **** new SegmentMetadata[] { new SegmentMetadata("" + outFile02, outFile02 ! .length(), ResourceState.Live) })); /* --- 937,941 ---- new SegmentMetadata[] { new SegmentMetadata("" + outFile02, outFile02 ! .length(), ResourceState.Live, builder.segmentUUID) })); /* *************** *** 1050,1054 **** assertTrue(!outFile01.exists() || outFile01.delete()); ! new IndexSegmentBuilder(outFile01,null,btree,100,0d); /* --- 1067,1071 ---- assertTrue(!outFile01.exists() || outFile01.delete()); ! IndexSegmentBuilder builder1 = new IndexSegmentBuilder(outFile01,null,btree,100,0d); /* *************** *** 1057,1061 **** md.put(new byte[] {}, new PartitionMetadata(0,1, new SegmentMetadata[] { new SegmentMetadata("" + outFile01, ! outFile01.length(),ResourceState.Live) })); /* --- 1074,1078 ---- md.put(new byte[] {}, new PartitionMetadata(0,1, new SegmentMetadata[] { new SegmentMetadata("" + outFile01, ! outFile01.length(),ResourceState.Live, builder1.segmentUUID) })); /* *************** *** 1094,1105 **** assertTrue(!outFile02.exists() || outFile02.delete()); ! new IndexSegmentBuilder(outFile02, null, btree, 100, 0d); /* * update the metadata index for this partition. */ ! md.put(new byte[] {}, new PartitionMetadata(0, 1, new SegmentMetadata[] { ! new SegmentMetadata("" + outFile01, outFile01.length(),ResourceState.Live), ! new SegmentMetadata("" + outFile02, outFile02.length(),ResourceState.Live) })); /* --- 1111,1125 ---- assertTrue(!outFile02.exists() || outFile02.delete()); ! IndexSegmentBuilder builder2 = new IndexSegmentBuilder(outFile02, null, btree, 100, 0d); /* * update the metadata index for this partition. */ ! md.put(new byte[] {}, new PartitionMetadata(0, 1, ! new SegmentMetadata[] { ! new SegmentMetadata("" + outFile01, outFile01.length(), ! ResourceState.Live, builder1.segmentUUID), ! new SegmentMetadata("" + outFile02, outFile02.length(), ! ResourceState.Live, builder2.segmentUUID) })); /* |
From: Bryan T. <tho...@us...> - 2007-03-27 17:12:03
|
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/objndx In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv2900/src/test/com/bigdata/objndx Modified Files: TestRestartSafe.java Log Message: Corrected problem in the interpretation of maximumExtent for an IBufferStrategy vs an IJournal. Working through use of isolatable indices for the triple store. Index: TestRestartSafe.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/objndx/TestRestartSafe.java,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -d -r1.8 -r1.9 *** TestRestartSafe.java 27 Mar 2007 14:34:21 -0000 1.8 --- TestRestartSafe.java 27 Mar 2007 17:11:42 -0000 1.9 *************** *** 58,61 **** --- 58,62 ---- import com.bigdata.journal.Journal; import com.bigdata.journal.Options; + import com.bigdata.rawstore.IRawStore; /** *************** *** 217,220 **** --- 218,306 ---- } + + /** + * Test verifies that classes which extend {@link BTree} are correctly + * restored by {@link BTree#load(com.bigdata.rawstore.IRawStore, long)}. + */ + public void test_restartSafeSubclass() { + + Journal journal = new Journal(getProperties()); + + final int m = 3; + + final long addr1; + + SimpleEntry v1 = new SimpleEntry(1); + SimpleEntry v2 = new SimpleEntry(2); + SimpleEntry v3 = new SimpleEntry(3); + SimpleEntry v4 = new SimpleEntry(4); + SimpleEntry v5 = new SimpleEntry(5); + SimpleEntry v6 = new SimpleEntry(6); + SimpleEntry v7 = new SimpleEntry(7); + SimpleEntry v8 = new SimpleEntry(8); + Object[] values = new Object[] { v5, v6, v7, v8, v3, v4, v2, v1 }; + + { + + final BTree btree = new MyBTree(journal, 3, UUID.randomUUID(), + SimpleEntry.Serializer.INSTANCE); + + byte[][] keys = new byte[][] { new byte[] { 5 }, new byte[] { 6 }, + new byte[] { 7 }, new byte[] { 8 }, new byte[] { 3 }, + new byte[] { 4 }, new byte[] { 2 }, new byte[] { 1 } }; + + btree.insert(new BatchInsert(values.length, keys, values)); + + assertTrue(btree.dump(Level.DEBUG, System.err)); + + // @todo verify in more detail. + assertSameIterator(new Object[] { v1, v2, v3, v4, v5, v6, v7, v8 }, + btree.entryIterator()); + + addr1 = btree.write(); + + journal.commit(); + + } + + /* + * restart, re-opening the same file. + */ + { + + journal = reopenStore(journal); + + final MyBTree btree = (MyBTree) BTree.load(journal, addr1); + + assertTrue(btree.dump(Level.DEBUG, System.err)); + + // @todo verify in more detail. + assertSameIterator(new Object[] { v1, v2, v3, v4, v5, v6, v7, v8 }, + btree.entryIterator()); + + journal.closeAndDelete(); + + } + + } + + public static class MyBTree extends BTree { + + public MyBTree(IRawStore store, int branchingFactor, UUID indexUUID, + IValueSerializer valSer) { + + super(store, branchingFactor, indexUUID, valSer); + + } + + /** + * @param store + * @param metadata + */ + public MyBTree(IRawStore store, BTreeMetadata metadata) { + super(store, metadata); + } + + } } |
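The new test pins down the contract for restart-safe BTree subclasses: both constructors must be present, as in MyBTree above. A sketch of a hypothetical subclass, under the assumption (the diff does not show the loading internals) that BTree.load(store, addr) re-instantiates the concrete class recorded in the metadata via its (IRawStore, BTreeMetadata) constructor:

    import java.util.UUID;
    import com.bigdata.objndx.BTree;            // package locations taken
    import com.bigdata.objndx.BTreeMetadata;    // from the diffs above
    import com.bigdata.objndx.IValueSerializer;
    import com.bigdata.rawstore.IRawStore;

    public class MyIndex extends BTree { // hypothetical subclass

        // used when creating a new, empty index
        public MyIndex(IRawStore store, int branchingFactor,
                UUID indexUUID, IValueSerializer valSer) {
            super(store, branchingFactor, indexUUID, valSer);
        }

        // required for restart: rebuilds the subclass from its
        // persistent metadata record
        public MyIndex(IRawStore store, BTreeMetadata metadata) {
            super(store, metadata);
        }
    }

    // after reopening the journal:
    // MyIndex ndx = (MyIndex) BTree.load(journal, addr);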
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv2900/src/java/com/bigdata/scaleup Modified Files: PartitionMetadata.java MasterJournal.java SegmentMetadata.java AbstractPartitionTask.java IResourceMetadata.java JournalMetadata.java Log Message: Corrected problem in the interpretation of maximumExtent for an IBufferStrategy vs an IJournal. Working through use of isolatable indices for the triple store. Index: JournalMetadata.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/JournalMetadata.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** JournalMetadata.java 8 Mar 2007 18:14:06 -0000 1.1 --- JournalMetadata.java 27 Mar 2007 17:11:41 -0000 1.2 *************** *** 45,48 **** --- 45,49 ---- import java.io.File; + import java.util.UUID; import com.bigdata.journal.Journal; *************** *** 61,65 **** protected final String filename; protected final ResourceState state; ! public File getFile() { return new File(filename); --- 62,67 ---- protected final String filename; protected final ResourceState state; ! protected final UUID uuid; ! public File getFile() { return new File(filename); *************** *** 78,81 **** --- 80,87 ---- } + public UUID getUUID() { + return uuid; + } + public JournalMetadata(Journal journal, ResourceState state) { *************** *** 90,94 **** this.state = state; } ! } \ No newline at end of file --- 96,102 ---- this.state = state; + this.uuid = journal.getRootBlockView().getUUID(); + } ! } Index: IResourceMetadata.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/IResourceMetadata.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** IResourceMetadata.java 8 Mar 2007 18:14:06 -0000 1.1 --- IResourceMetadata.java 27 Mar 2007 17:11:41 -0000 1.2 *************** *** 45,51 **** --- 45,53 ---- import java.io.File; + import java.util.UUID; import com.bigdata.journal.Journal; import com.bigdata.objndx.IndexSegment; + import com.bigdata.objndx.IndexSegmentMetadata; /** *************** *** 72,75 **** --- 74,83 ---- public ResourceState state(); + /** + * The unique identifier for the resource (the UUID found in either the + * journal root block or the {@link IndexSegmentMetadata}). + */ + public UUID getUUID(); + // public int hashCode(); // Index: AbstractPartitionTask.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/AbstractPartitionTask.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** AbstractPartitionTask.java 27 Mar 2007 14:34:22 -0000 1.3 --- AbstractPartitionTask.java 27 Mar 2007 17:11:41 -0000 1.4 *************** *** 257,268 **** File outFile = master.getSegmentFile(name, partId, segId); ! new IndexSegmentBuilder(outFile, master.tmpDir, src.rangeCount( ! fromKey, toKey), src.rangeIterator(fromKey, toKey), ! branchingFactor, valSer, useChecksum, recordCompressor, ! errorRate, indexUUID); IResourceMetadata[] resources = new SegmentMetadata[] { new SegmentMetadata( ! "" + outFile, outFile.length(), ! ResourceState.New) }; updatePartition(resources); --- 257,268 ---- File outFile = master.getSegmentFile(name, partId, segId); ! IndexSegmentBuilder builder = new IndexSegmentBuilder(outFile, ! master.tmpDir, src.rangeCount(fromKey, toKey), src ! 
.rangeIterator(fromKey, toKey), branchingFactor, ! valSer, useChecksum, recordCompressor, errorRate, indexUUID); IResourceMetadata[] resources = new SegmentMetadata[] { new SegmentMetadata( ! "" + outFile, outFile.length(), ResourceState.New, ! builder.segmentUUID) }; updatePartition(resources); *************** *** 345,351 **** // build the merged index segment. ! new IndexSegmentBuilder(outFile, null, mergeItr.nentries, ! new MergedEntryIterator(mergeItr), branchingFactor, valSer, ! useChecksum, recordCompressor, errorRate, indexUUID); // close the merged leaf iterator (and release its buffer/file). --- 345,352 ---- // build the merged index segment. ! IndexSegmentBuilder builder = new IndexSegmentBuilder(outFile, ! null, mergeItr.nentries, new MergedEntryIterator(mergeItr), ! branchingFactor, valSer, useChecksum, recordCompressor, ! errorRate, indexUUID); // close the merged leaf iterator (and release its buffer/file). *************** *** 388,395 **** newSegs[0] = new SegmentMetadata(oldSeg.filename, oldSeg.nbytes, ! ResourceState.Dead); ! newSegs[1] = new SegmentMetadata(outFile.toString(), outFile.length(), ! ResourceState.Live); mdi.put(fromKey, new PartitionMetadata(0, segId + 1, newSegs)); --- 389,396 ---- newSegs[0] = new SegmentMetadata(oldSeg.filename, oldSeg.nbytes, ! ResourceState.Dead, oldSeg.uuid); ! newSegs[1] = new SegmentMetadata(outFile.toString(), outFile ! .length(), ResourceState.Live, builder.segmentUUID); mdi.put(fromKey, new PartitionMetadata(0, segId + 1, newSegs)); Index: SegmentMetadata.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/SegmentMetadata.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** SegmentMetadata.java 8 Mar 2007 18:14:05 -0000 1.3 --- SegmentMetadata.java 27 Mar 2007 17:11:41 -0000 1.4 *************** *** 45,48 **** --- 45,49 ---- import java.io.File; + import java.util.UUID; import com.bigdata.objndx.IndexSegment; *************** *** 71,75 **** final public ResourceState state; ! public SegmentMetadata(String filename,long nbytes,ResourceState state) { this.filename = filename; --- 72,78 ---- final public ResourceState state; ! final public UUID uuid; ! ! public SegmentMetadata(String filename,long nbytes,ResourceState state, UUID uuid ) { this.filename = filename; *************** *** 79,82 **** --- 82,87 ---- this.state = state; + this.uuid = uuid; + } *************** *** 88,92 **** SegmentMetadata o2 = (SegmentMetadata)o; ! if(filename.equals(o2.filename) && nbytes==o2.nbytes && state == o2.state) return true; return false; --- 93,99 ---- SegmentMetadata o2 = (SegmentMetadata)o; ! if (filename.equals(o2.filename) && nbytes == o2.nbytes ! && state == o2.state && uuid.equals(o2.uuid)) ! 
return true; return false; *************** *** 105,108 **** --- 112,119 ---- return state; } + + public UUID getUUID() { + return uuid; + } } Index: PartitionMetadata.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/PartitionMetadata.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** PartitionMetadata.java 8 Mar 2007 18:14:06 -0000 1.3 --- PartitionMetadata.java 27 Mar 2007 17:11:41 -0000 1.4 *************** *** 46,50 **** --- 46,52 ---- import java.io.DataInputStream; import java.io.DataOutputStream; + import java.io.Externalizable; import java.io.IOException; + import java.util.UUID; import com.bigdata.objndx.IValueSerializer; *************** *** 52,58 **** /** ! * A description of the {@link IndexSegment}s containing the user data for ! * a partition. ! * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ --- 54,63 ---- /** ! * A description of the {@link IndexSegment}s containing the user data for a ! * partition. ! * ! * FIXME add ordered UUID[] of the data services on which the index partition ! * has been mapped. ! * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ *************** *** 267,270 **** --- 272,280 ---- * Serialization for an index segment metadata entry. * + * FIXME implement {@link Externalizable} and use explicit versioning. + * + * FIXME assumes that resources are {@link IndexSegment}s rather than + * either index segments or journals. + * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ *************** *** 307,310 **** --- 317,324 ---- os.writeInt(segmentMetadata.state.valueOf()); + os.writeLong(segmentMetadata.uuid.getMostSignificantBits()); + + os.writeLong(segmentMetadata.uuid.getLeastSignificantBits()); + } *************** *** 336,340 **** .valueOf(is.readInt()); ! val.segs[j] = new SegmentMetadata(filename, nbytes, state); } --- 350,356 ---- .valueOf(is.readInt()); ! UUID uuid = new UUID(is.readLong()/*MSB*/,is.readLong()/*LSB*/); ! ! val.segs[j] = new SegmentMetadata(filename, nbytes, state, uuid); } Index: MasterJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/MasterJournal.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** MasterJournal.java 27 Mar 2007 14:34:22 -0000 1.4 --- MasterJournal.java 27 Mar 2007 17:11:41 -0000 1.5 *************** *** 919,925 **** File outFile = getSegmentFile(name,pmd.partId,segId); ! new IndexSegmentBuilder(outFile, tmpDir, oldIndex.btree ! .getEntryCount(), oldIndex.btree.getRoot().entryIterator(), ! mseg, Value.Serializer.INSTANCE, true/* useChecksum */, null/* new RecordCompressor() */, 0d, oldIndex.btree .getIndexUUID()); --- 919,926 ---- File outFile = getSegmentFile(name,pmd.partId,segId); ! IndexSegmentBuilder builder = new IndexSegmentBuilder(outFile, ! tmpDir, oldIndex.btree.getEntryCount(), oldIndex.btree ! .getRoot().entryIterator(), mseg, ! Value.Serializer.INSTANCE, true/* useChecksum */, null/* new RecordCompressor() */, 0d, oldIndex.btree .getIndexUUID()); *************** *** 928,939 **** * update the metadata index for this partition. */ ! mdi.put(separatorKey, ! new PartitionMetadata(0, segId + 1, ! new SegmentMetadata[] { new SegmentMetadata("" ! + outFile, outFile.length(), ! ResourceState.Live) })); ! // /* ! // * open and verify the index segment against the btree data. 
// */ // seg = new IndexSegment(new IndexSegmentFileStore(outFile01), btree --- 929,939 ---- * update the metadata index for this partition. */ ! mdi.put(separatorKey, new PartitionMetadata(0, segId + 1, ! new SegmentMetadata[] { new SegmentMetadata("" + outFile, ! outFile.length(), ResourceState.Live, ! builder.segmentUUID) })); ! // /* ! // * open and verify the index segment against the btree data. // */ // seg = new IndexSegment(new IndexSegmentFileStore(outFile01), btree *************** *** 963,971 **** // build the merged index segment. ! new IndexSegmentBuilder(outFile, null, mergeItr.nentries, ! new MergedEntryIterator(mergeItr), mseg, oldIndex.btree ! .getNodeSerializer().getValueSerializer(), ! false/* useChecksum */, null/* recordCompressor */, ! 0d/* errorRate */, oldIndex.btree.getIndexUUID()); // close the merged leaf iterator (and release its buffer/file). --- 963,972 ---- // build the merged index segment. ! IndexSegmentBuilder builder = new IndexSegmentBuilder(outFile, ! null, mergeItr.nentries, new MergedEntryIterator(mergeItr), ! mseg, oldIndex.btree.getNodeSerializer() ! .getValueSerializer(), false/* useChecksum */, ! null/* recordCompressor */, 0d/* errorRate */, ! oldIndex.btree.getIndexUUID()); // close the merged leaf iterator (and release its buffer/file). *************** *** 1001,1008 **** newSegs[0] = new SegmentMetadata(oldSeg.filename, oldSeg.nbytes, ! ResourceState.Dead); ! newSegs[1] = new SegmentMetadata(outFile.toString(), outFile.length(), ! ResourceState.Live); mdi.put(separatorKey, new PartitionMetadata(0, segId + 1, newSegs)); --- 1002,1009 ---- newSegs[0] = new SegmentMetadata(oldSeg.filename, oldSeg.nbytes, ! ResourceState.Dead, oldSeg.uuid); ! newSegs[1] = new SegmentMetadata(outFile.toString(), outFile ! .length(), ResourceState.Live, builder.segmentUUID); mdi.put(separatorKey, new PartitionMetadata(0, segId + 1, newSegs)); |
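The PartitionMetadata serializer above persists each UUID as two longs, most significant bits first. The encoding round-trips cleanly; here is a self-contained sketch of just that encoding, using only java.io and java.util, nothing bigdata-specific:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.util.UUID;

    public class UUIDCodecSketch {
        public static void main(String[] args) throws IOException {
            UUID expected = UUID.randomUUID();

            // write: MSB then LSB, matching the serializer above
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            DataOutputStream os = new DataOutputStream(baos);
            os.writeLong(expected.getMostSignificantBits());
            os.writeLong(expected.getLeastSignificantBits());
            os.flush();

            // read: Java evaluates arguments left to right, so the inline
            // readLong() pair restores MSB and LSB in the right order
            DataInputStream is = new DataInputStream(
                    new ByteArrayInputStream(baos.toByteArray()));
            UUID actual = new UUID(is.readLong()/* MSB */, is.readLong()/* LSB */);

            if (!expected.equals(actual))
                throw new AssertionError("UUID round trip failed");
        }
    }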
From: Bryan T. <tho...@us...> - 2007-03-27 17:12:03
|
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/service In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv2900/src/java/com/bigdata/service Modified Files: MetadataService.java Log Message: Corrected problem in the interpretation of maximumExtent for an IBufferStrategy vs an IJournal. Working through use of isolatable indices for the triple store. Index: MetadataService.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/service/MetadataService.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** MetadataService.java 27 Mar 2007 14:34:23 -0000 1.3 --- MetadataService.java 27 Mar 2007 17:11:42 -0000 1.4 *************** *** 61,72 **** * @version $Id$ * - * FIXME Tag each index with a UUID. The UUID needs to appear in the index - * metadata record for each journal and index segment. When it is an named - * (scale-out) index, the UUID of the scale-out index must be used for each - * B+Tree metadata record having data for that index. This allows us to map - * backwards from the data structures to the metadata index. Document this in - * the UML model. (I still need to get the correct index UUID to each BTree - * constuctor since they are all using a Random UUID right now.) - * * @todo Provide a means to reconstruct the metadata index from the journal and * index segment data files. We tag each journal and index segment with a --- 61,64 ---- |
From: Bryan T. <tho...@us...> - 2007-03-27 17:12:03
|
Update of /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv3355/src/java/com/bigdata/rdf Modified Files: TripleStore.java Log Message: Corrected problem in the interpretation of maximumExtent for an IBufferStrategy vs an IJournal. Working through use of isolatable indices for the triple store. Index: TripleStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/TripleStore.java,v retrieving revision 1.24 retrieving revision 1.25 diff -C2 -d -r1.24 -r1.25 *** TripleStore.java 27 Mar 2007 14:35:08 -0000 1.24 --- TripleStore.java 27 Mar 2007 17:11:48 -0000 1.25 *************** *** 90,93 **** --- 90,94 ---- import com.bigdata.rdf.serializers.StatementSerializer; import com.bigdata.rdf.serializers.TermIdSerializer; + import com.bigdata.scaleup.MasterJournal; import com.bigdata.scaleup.PartitionedIndexView; import com.bigdata.scaleup.SlaveJournal; *************** *** 201,205 **** * @version $Id$ */ ! public class TripleStore extends /*Partitioned*/Journal { /** --- 202,206 ---- * @version $Id$ */ ! public class TripleStore extends MasterJournal { /** *************** *** 300,304 **** * Returns and creates iff necessary a scalable restart safe index for RDF * {@link _Statement statements}. ! * @param name The name of the index. * @return The index. * --- 301,308 ---- * Returns and creates iff necessary a scalable restart safe index for RDF * {@link _Statement statements}. ! * ! * @param name ! * The name of the index. ! * * @return The index. * |
From: Bryan T. <tho...@us...> - 2007-03-27 17:12:03
|
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/objndx In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv2900/src/java/com/bigdata/objndx Modified Files: IndexSegmentBuilder.java IndexSegmentMetadata.java Log Message: Corrected problem in the interpretation of maximumExtent for an IBufferStrategy vs an IJournal. Working through use of isolatable indices for the triple store. Index: IndexSegmentBuilder.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/objndx/IndexSegmentBuilder.java,v retrieving revision 1.29 retrieving revision 1.30 diff -C2 -d -r1.29 -r1.30 *** IndexSegmentBuilder.java 27 Mar 2007 14:34:22 -0000 1.29 --- IndexSegmentBuilder.java 27 Mar 2007 17:11:41 -0000 1.30 *************** *** 199,202 **** --- 199,209 ---- /** + * The unique identifier for the generated {@link IndexSegment} resource. + * + * @see #indexUUID + */ + final public UUID segmentUUID; + + /** * The unique identifier for the index whose data is stored in this B+Tree * data structure. When using a scale-out index the same <i>indexUUID</i> *************** *** 205,210 **** * backwards from the B+Tree data structures and identify the index to which * they belong. */ ! final protected UUID indexUUID; /** --- 212,219 ---- * backwards from the B+Tree data structures and identify the index to which * they belong. + * + * @see #segmentUUID */ ! final public UUID indexUUID; /** *************** *** 225,234 **** final BloomFilter bloomFilter; - // /** - // * When non-null, a map containing extension metadata. This is set by the - // * constructor. - // */ - // final Map<String, Serializable> metadataMap; - /** * The offset in the output file of the last leaf written onto that file. --- 234,237 ---- *************** *** 470,478 **** * @throws IOException */ - // * @param metadataMap - // * An optional serializable map containing application defined - // * extension metadataMap. The map will be serialized with the - // * {@link IndexSegmentExtensionMetadata} object as part of the - // * {@link IndexSegmentFileStore}. public IndexSegmentBuilder(File outFile, File tmpDir, final int entryCount, IEntryIterator entryIterator, final int m, --- 473,476 ---- *************** *** 480,484 **** RecordCompressor recordCompressor, final double errorRate, final UUID indexUUID - // , final Map<String, Serializable> metadataMap ) throws IOException { --- 478,481 ---- *************** *** 494,497 **** --- 491,495 ---- this.useChecksum = useChecksum; this.recordCompressor = recordCompressor; + this.segmentUUID = UUID.randomUUID(); this.indexUUID = indexUUID; *************** *** 564,569 **** } - // this.metadataMap = metadataMap; - // Used to serialize the nodes and leaves for the output tree. nodeSer = new NodeSerializer(NOPNodeFactory.INSTANCE, --- 562,565 ---- *************** *** 1440,1444 **** IndexSegmentExtensionMetadata extensionMetadata = new IndexSegmentExtensionMetadata( cl, nodeSer.valueSerializer, nodeSer.recordCompressor); - // metadataMap); final byte[] extensionMetadataBytes = SerializerUtil --- 1436,1439 ---- *************** *** 1475,1479 **** plan.nentries, maxNodeOrLeafLength, addrLeaves, addrNodes, addrRoot, addrExtensionMetadata, addrBloom, errorRate, out ! .length(), indexUUID, now); md.write(out); --- 1470,1474 ---- plan.nentries, maxNodeOrLeafLength, addrLeaves, addrNodes, addrRoot, addrExtensionMetadata, addrBloom, errorRate, out ! 
.length(), indexUUID, segmentUUID, now); md.write(out); Index: IndexSegmentMetadata.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/objndx/IndexSegmentMetadata.java,v retrieving revision 1.12 retrieving revision 1.13 diff -C2 -d -r1.12 -r1.13 *** IndexSegmentMetadata.java 27 Mar 2007 14:34:23 -0000 1.12 --- IndexSegmentMetadata.java 27 Mar 2007 17:11:41 -0000 1.13 *************** *** 286,290 **** int maxNodeOrLeafLength, long addrLeaves, long addrNodes, long addrRoot, long addrExtensionMetadata, long addrBloom, ! double errorRate, long length, UUID indexUUID, long timestamp) { assert branchingFactor >= BTree.MIN_BRANCHING_FACTOR; --- 286,291 ---- int maxNodeOrLeafLength, long addrLeaves, long addrNodes, long addrRoot, long addrExtensionMetadata, long addrBloom, ! double errorRate, long length, UUID indexUUID, UUID segmentUUID, ! long timestamp) { assert branchingFactor >= BTree.MIN_BRANCHING_FACTOR; *************** *** 327,331 **** assert timestamp != 0L; ! this.segmentUUID = UUID.randomUUID(); this.branchingFactor = branchingFactor; --- 328,336 ---- assert timestamp != 0L; ! assert segmentUUID != null; ! ! assert indexUUID != null; ! ! this.segmentUUID = segmentUUID; this.branchingFactor = branchingFactor; |
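After this change an index segment carries two identifiers with different scopes: segmentUUID names the physical resource, while indexUUID names the scale-out index the data belongs to and is shared by every resource holding data for that index. A sketch of the distinction, assuming the five-argument builder constructor takes the index UUID from the source btree (as the explicit btree.getIndexUUID() arguments elsewhere in these diffs suggest); the file names and the btree variable are illustrative:

    import java.io.File;
    import com.bigdata.objndx.BTree;
    import com.bigdata.objndx.IndexSegmentBuilder;

    // two segments built from the same source index
    // (the builder constructor declares IOException)
    IndexSegmentBuilder b1 = new IndexSegmentBuilder(
            new File("part0a.seg"), null, btree, 100, 0d);
    IndexSegmentBuilder b2 = new IndexSegmentBuilder(
            new File("part0b.seg"), null, btree, 100, 0d);

    // same logical index, so the index UUID is shared:
    //   b1.indexUUID.equals(b2.indexUUID)      -> true
    // each resource gets its own random segment UUID:
    //   b1.segmentUUID.equals(b2.segmentUUID)  -> false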
From: Bryan T. <tho...@us...> - 2007-03-27 14:35:16
|
Update of /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/inf In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv6717/src/test/com/bigdata/rdf/inf Modified Files: TestMagicSets.java Log Message: Added indexUUID to AbstractBTree so that each scale-out index may have a unique identifier. Modified the BTreeMetadata class and derived classes to use Externalizable, to support explicit versioning of the metadata record, and to have private fields since they cannot be final with Externalizable. Index: TestMagicSets.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/inf/TestMagicSets.java,v retrieving revision 1.6 retrieving revision 1.7 diff -C2 -d -r1.6 -r1.7 *** TestMagicSets.java 22 Feb 2007 16:58:58 -0000 1.6 --- TestMagicSets.java 27 Mar 2007 14:35:08 -0000 1.7 *************** *** 55,58 **** --- 55,59 ---- import org.openrdf.vocabulary.RDFS; + import com.bigdata.rdf.model.OptimizedValueFactory._URI; import com.bigdata.rdf.TempTripleStore; import com.bigdata.rdf.TripleStore; *************** *** 187,201 **** * setup the database. */ ! URI x = new URIImpl("http://www.foo.org/x"); ! URI y = new URIImpl("http://www.foo.org/y"); ! URI z = new URIImpl("http://www.foo.org/z"); ! URI A = new URIImpl("http://www.foo.org/A"); ! URI B = new URIImpl("http://www.foo.org/B"); ! URI C = new URIImpl("http://www.foo.org/C"); ! URI rdfType = new URIImpl(RDF.TYPE); ! URI rdfsSubClassOf = new URIImpl(RDFS.SUBCLASSOF); store.addStatement(x, rdfType, C); --- 188,202 ---- * setup the database. */ ! URI x = new _URI("http://www.foo.org/x"); ! URI y = new _URI("http://www.foo.org/y"); ! URI z = new _URI("http://www.foo.org/z"); ! URI A = new _URI("http://www.foo.org/A"); ! URI B = new _URI("http://www.foo.org/B"); ! URI C = new _URI("http://www.foo.org/C"); ! URI rdfType = new _URI(RDF.TYPE); ! URI rdfsSubClassOf = new _URI(RDFS.SUBCLASSOF); store.addStatement(x, rdfType, C); *************** *** 221,225 **** // query :- triple(?s,rdf:type,A). Triple query = new Triple(store.nextVar(), store.rdfType, new Id(store ! .addTerm(new URIImpl("http://www.foo.org/A")))); // Run the query. --- 222,226 ---- // query :- triple(?s,rdf:type,A). Triple query = new Triple(store.nextVar(), store.rdfType, new Id(store ! .addTerm(new _URI("http://www.foo.org/A")))); // Run the query. |