From: Bryan T. <tho...@us...> - 2007-03-15 16:11:47
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv10595/src/java/com/bigdata/scaleup Modified Files: MasterJournal.java SlaveJournal.java Added Files: IMetadataIndexManager.java Log Message: Refactoring to define service apis (data service, transaction manager service) and some approximate implementations of those services (not supporting service discovery, network protocol, or service robustness). Copied in the UML model so that it will actually get committed to CVS.... --- NEW FILE: IMetadataIndexManager.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Mar 13, 2007 */ package com.bigdata.scaleup; import com.bigdata.journal.IIndexManager; import com.bigdata.journal.IJournal; /** * An interface for managing global definitions of scalable indices including * index partitions. In a scale-out architecture, this interface is realized as * a robust service with failover. The client protocol for this service should * use pre-fetch and leases to reduce latency and enable clients to talk * directly with data services to the greatest extent possible. During the term * of a lease, a client may safely presume that any index partition definitions * covered by that lease are valid and therefore may issue operations directly * to the appropriate data service instances. However, once a lease has expired * a client must verify that index partition definitions before issuing * operations against those partitions. The duration of a lease is managed so as * to balance the time between the completion of an index build task, at which * point the index partition definitions are updated and can be immediately used * by new clients, and the release of the inputs to the index build task (at * which point the old index partition definition is no longer useable). * <p> * Note that this interface extends the semantics of {@link IIndexManager} to * provide a global registration or removal of an index while the latter * operations in a purely local manner on an {@link IJournal}. 
* * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public interface IMetadataIndexManager extends IIndexManager { /* * Note: IIndexManager is implemented by AbstractJournal but the semantics * of this implementation are purely local -- indices are registered on or * dropped from the specific journal instance only not the distributed * database. An IIndexManager service is responsible for a distinct global * registration of an index and for managing the mapping of the index onto * journals and index segments via the metadata index. */ } Index: SlaveJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/SlaveJournal.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** SlaveJournal.java 11 Mar 2007 11:42:42 -0000 1.4 --- SlaveJournal.java 15 Mar 2007 16:11:10 -0000 1.5 *************** *** 52,55 **** --- 52,56 ---- import com.bigdata.journal.Journal; import com.bigdata.journal.Name2Addr; + import com.bigdata.journal.ResourceManager; import com.bigdata.objndx.BTree; import com.bigdata.objndx.IEntryIterator; *************** *** 83,87 **** private final MasterJournal master; ! protected MasterJournal getMaster() { --- 84,88 ---- private final MasterJournal master; ! protected MasterJournal getMaster() { *************** *** 100,104 **** } ! /** * The overflow event is delegated to the master. --- 101,105 ---- } ! /** * The overflow event is delegated to the master. *************** *** 106,109 **** --- 107,113 ---- public void overflow() { + // handles event reporting. + super.overflow(); + master.overflow(); Index: MasterJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/MasterJournal.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** MasterJournal.java 11 Mar 2007 11:42:38 -0000 1.1 --- MasterJournal.java 15 Mar 2007 16:11:10 -0000 1.2 *************** *** 66,69 **** --- 66,70 ---- import com.bigdata.journal.IsolationEnum; import com.bigdata.journal.Journal; + import com.bigdata.journal.ResourceManager; import com.bigdata.journal.Name2Addr.Entry; import com.bigdata.objndx.AbstractBTree; *************** *** 676,682 **** oldJournal.closeAndDelete(); - // // delete the old backing file (if any). - // oldJournal.getBufferStrategy().deleteFile(); - } --- 677,680 ---- *************** *** 1263,1270 **** } - public long newTx() { - return slave.newTx(); - } - public long newTx(IsolationEnum level) { return slave.newTx(level); --- 1261,1264 ---- *************** *** 1320,1326 **** if( seg == null ) { ! seg = new IndexSegmentFileStore(resource.getFile()).load(); resourceCache.put(filename, seg, false); --- 1314,1326 ---- if( seg == null ) { + + // open the file. + IndexSegmentFileStore fileStore = new IndexSegmentFileStore( + resource.getFile()); ! // load the btree. ! seg = fileStore.load(); + // save in the cache. resourceCache.put(filename, seg, false); |
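The IMetadataIndexManager javadoc above describes a lease-based client protocol: while a lease is live, a client may trust its cached index partition definitions and route operations directly to data services; once the lease expires, the definitions must be re-validated against the metadata service. A minimal client-side sketch of the bookkeeping that protocol implies follows; the class and member names are hypothetical and are not part of the committed API.

    /**
     * Hypothetical client-side cache entry illustrating the lease semantics
     * described in the IMetadataIndexManager javadoc.  While the lease is
     * live the cached partition definition may be used to route operations
     * directly to a data service; once it expires the definition must be
     * re-fetched from the metadata service before further use.
     */
    public class CachedPartitionDefinition<T> {

        private final T definition;      // cached index partition definition.
        private final long leaseExpires; // lease end, per System.currentTimeMillis().

        public CachedPartitionDefinition(T definition, long leaseDurationMillis) {
            this.definition = definition;
            this.leaseExpires = System.currentTimeMillis() + leaseDurationMillis;
        }

        /** True while the client may use the cached definition without re-validation. */
        public boolean isValid() {
            return System.currentTimeMillis() < leaseExpires;
        }

        /** The cached definition, or an exception if the lease has expired. */
        public T get() {
            if (!isValid())
                throw new IllegalStateException("lease expired: re-fetch the partition definition");
            return definition;
        }
    }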
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/journal In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv10595/src/test/com/bigdata/journal Modified Files: TestTransactionServer.java AbstractTestTxRunState.java TestConflictResolution.java TestTx.java StressTestConcurrent.java Log Message: Refactoring to define service apis (data service, transaction manager service) and some approximate implementations of those services (not supporting service discovery, network protocol, or service robustness). Copied in the UML model so that it will actually get committed to CVS.... Index: TestConflictResolution.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestConflictResolution.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** TestConflictResolution.java 22 Feb 2007 16:59:34 -0000 1.4 --- TestConflictResolution.java 15 Mar 2007 16:11:09 -0000 1.5 *************** *** 152,158 **** */ ! final long tx1 = journal.newTx(); ! final long tx2 = journal.newTx(); /* --- 152,158 ---- */ ! final long tx1 = journal.newTx(IsolationEnum.ReadWrite); ! final long tx2 = journal.newTx(IsolationEnum.ReadWrite); /* *************** *** 227,233 **** */ ! final long tx1 = journal.newTx(); ! final long tx2 = journal.newTx(); /* --- 227,233 ---- */ ! final long tx1 = journal.newTx(IsolationEnum.ReadWrite); ! final long tx2 = journal.newTx(IsolationEnum.ReadWrite); /* *************** *** 296,302 **** // */ // ! // final long tx1 = journal.newTx(); // ! // final long tx2 = journal.newTx(); // // /* --- 296,302 ---- // */ // ! // final long tx1 = journal.newTx(IsolationEnum.ReadWrite); // ! // final long tx2 = journal.newTx(IsolationEnum.ReadWrite); // // /* Index: TestTransactionServer.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestTransactionServer.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** TestTransactionServer.java 28 Feb 2007 13:59:09 -0000 1.3 --- TestTransactionServer.java 15 Mar 2007 16:11:09 -0000 1.4 *************** *** 49,56 **** import junit.framework.TestCase; /** ! * Test suite for semantics of the {@link TransactionServer}. * * @todo The test suite has to cover the basic API, including correct generation --- 49,58 ---- + import com.bigdata.service.OldTransactionServer; + import junit.framework.TestCase; /** ! * Test suite for semantics of the {@link OldTransactionServer}. * * @todo The test suite has to cover the basic API, including correct generation *************** *** 81,85 **** public void test_ctor() { ! new TransactionServer(); } --- 83,87 ---- public void test_ctor() { ! new OldTransactionServer(); } *************** *** 99,103 **** public void test001() { ! TransactionServer server = new MyTransactionServer(); /* groundState is t0. --- 101,105 ---- public void test001() { ! OldTransactionServer server = new MyTransactionServer(); /* groundState is t0. *************** *** 149,153 **** * @version $Id$ */ ! static class MyTransactionServer extends TransactionServer { public MyTransactionServer() { --- 151,155 ---- * @version $Id$ */ ! 
static class MyTransactionServer extends OldTransactionServer { public MyTransactionServer() { Index: TestTx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestTx.java,v retrieving revision 1.20 retrieving revision 1.21 diff -C2 -d -r1.20 -r1.21 *** TestTx.java 11 Mar 2007 11:42:34 -0000 1.20 --- TestTx.java 15 Mar 2007 16:11:09 -0000 1.21 *************** *** 96,100 **** journal.commit(); ! final long tx = journal.newTx(); /* --- 96,100 ---- journal.commit(); ! final long tx = journal.newTx(IsolationEnum.ReadWrite); /* *************** *** 124,128 **** // start tx1. ! final long tx1 = journal.newTx(); // the index is not visible in tx1. --- 124,128 ---- // start tx1. ! final long tx1 = journal.newTx(IsolationEnum.ReadWrite); // the index is not visible in tx1. *************** *** 133,137 **** // start tx2. ! final long tx2 = journal.newTx(); // the index still is not visible in tx1. --- 133,137 ---- // start tx2. ! final long tx2 = journal.newTx(IsolationEnum.ReadWrite); // the index still is not visible in tx1. *************** *** 169,173 **** } ! final long tx1 = journal.newTx(); final IIndex ndx1 = journal.getIndex(name,tx1); --- 169,173 ---- } ! final long tx1 = journal.newTx(IsolationEnum.ReadWrite); final IIndex ndx1 = journal.getIndex(name,tx1); *************** *** 175,179 **** assertNotNull(ndx1); ! final long tx2 = journal.newTx(); final IIndex ndx2 = journal.getIndex(name,tx2); --- 175,179 ---- assertNotNull(ndx1); ! final long tx2 = journal.newTx(IsolationEnum.ReadWrite); final IIndex ndx2 = journal.getIndex(name,tx2); *************** *** 229,233 **** } ! final long tx1 = journal.newTx(); { --- 229,233 ---- } ! final long tx1 = journal.newTx(IsolationEnum.ReadWrite); { *************** *** 275,279 **** } ! final long tx2 = journal.newTx(); { --- 275,279 ---- } ! final long tx2 = journal.newTx(IsolationEnum.ReadWrite); { *************** *** 337,343 **** */ ! final long tx1 = journal.newTx(); ! final long tx2 = journal.newTx(); { --- 337,343 ---- */ ! final long tx1 = journal.newTx(IsolationEnum.ReadWrite); ! final long tx2 = journal.newTx(IsolationEnum.ReadWrite); { *************** *** 447,453 **** */ ! final long tx0 = journal.newTx(); ! final long tx1 = journal.newTx(); /* --- 447,453 ---- */ ! final long tx0 = journal.newTx(IsolationEnum.ReadWrite); ! final long tx1 = journal.newTx(IsolationEnum.ReadWrite); /* *************** *** 536,542 **** */ ! final long tx0 = journal.newTx(); ! final long tx1 = journal.newTx(); /* --- 536,542 ---- */ ! final long tx0 = journal.newTx(IsolationEnum.ReadWrite); ! final long tx1 = journal.newTx(IsolationEnum.ReadWrite); /* *************** *** 643,649 **** */ ! final long tx0 = journal.newTx(); ! final long tx1 = journal.newTx(); /* --- 643,649 ---- */ ! final long tx0 = journal.newTx(IsolationEnum.ReadWrite); ! final long tx1 = journal.newTx(IsolationEnum.ReadWrite); /* *************** *** 1170,1180 **** * change will not be visible in this scope. */ ! final long tx0 = journal.newTx(); // transaction on which we write and later commit. ! final long tx1 = journal.newTx(); // new transaction - commit will not be visible in this scope. ! final long tx2 = journal.newTx(); final byte[] id1 = new byte[]{1}; --- 1170,1180 ---- * change will not be visible in this scope. */ ! final long tx0 = journal.newTx(IsolationEnum.ReadWrite); // transaction on which we write and later commit. ! 
final long tx1 = journal.newTx(IsolationEnum.ReadWrite); // new transaction - commit will not be visible in this scope. ! final long tx2 = journal.newTx(IsolationEnum.ReadWrite); final byte[] id1 = new byte[]{1}; *************** *** 1204,1208 **** // new transaction - commit is visible in this scope. ! final long tx3 = journal.newTx(); // data version still not visible in tx0. --- 1204,1208 ---- // new transaction - commit is visible in this scope. ! final long tx3 = journal.newTx(IsolationEnum.ReadWrite); // data version still not visible in tx0. *************** *** 1271,1275 **** // start transaction. ! final long tx0 = journal.newTx(); // data version visible in the transaction. --- 1271,1275 ---- // start transaction. ! final long tx0 = journal.newTx(IsolationEnum.ReadWrite); // data version visible in the transaction. Index: StressTestConcurrent.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/StressTestConcurrent.java,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -d -r1.8 -r1.9 *** StressTestConcurrent.java 12 Mar 2007 18:06:12 -0000 1.8 --- StressTestConcurrent.java 15 Mar 2007 16:11:09 -0000 1.9 *************** *** 279,283 **** public Long call() throws Exception { ! final long tx = journal.newTx(); final IIndex ndx = journal.getIndex(name,tx); --- 279,283 ---- public Long call() throws Exception { ! final long tx = journal.newTx(IsolationEnum.ReadWrite); final IIndex ndx = journal.getIndex(name,tx); *************** *** 367,371 **** * However, there is still going to be something that was causing * those transactions and their stores to be hanging around -- perhaps ! * the commitService in the Journal which might be holding onto * {@link Future}s? * --- 367,371 ---- * However, there is still going to be something that was causing * those transactions and their stores to be hanging around -- perhaps ! * the writeService in the Journal which might be holding onto * {@link Future}s? * Index: AbstractTestTxRunState.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/AbstractTestTxRunState.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** AbstractTestTxRunState.java 28 Feb 2007 13:59:09 -0000 1.1 --- AbstractTestTxRunState.java 15 Mar 2007 16:11:09 -0000 1.2 *************** *** 180,184 **** * been executed. */ ! journal.commitService.shutdown(); assertFalse(tx0.isActive()); --- 180,184 ---- * been executed. */ ! journal.writeService.shutdown(); assertFalse(tx0.isActive()); *************** *** 576,580 **** // } // ! // ITx tx0 = journal.newTx(); // // IIndex ndx = tx0.getIndex(name); --- 576,580 ---- // } // ! // ITx tx0 = journal.newTx(IsolationEnum.ReadWrite); // // IIndex ndx = tx0.getIndex(name); *************** *** 660,664 **** // } // ! // ITx tx0 = journal.newTx(); // // IIndex ndx = tx0.getIndex(name); --- 660,664 ---- // } // ! // ITx tx0 = journal.newTx(IsolationEnum.ReadWrite); // // IIndex ndx = tx0.getIndex(name); |
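The test changes above replace every no-argument newTx() call with an explicit isolation level. A short sketch of the resulting client pattern, as exercised by the updated tests, is below; the commit(long) call follows the ITransactionManager signature shown in the journal-package diff later in this digest, and whether a concrete Journal exposes that method directly is an assumption of this sketch.

    import com.bigdata.journal.IsolationEnum;
    import com.bigdata.journal.Journal;
    import com.bigdata.objndx.IIndex;

    public class NewTxUsageSketch {

        /** Run some isolated work against a named index and commit it. */
        public static void runIsolatedTask(Journal journal, String name) throws Exception {

            // The no-argument newTx() is gone; the isolation level is now explicit.
            final long tx = journal.newTx(IsolationEnum.ReadWrite);

            // The isolated view of the named index for this transaction.
            final IIndex ndx = journal.getIndex(name, tx);

            // ... reads and writes against [ndx] are buffered in the tx write set ...

            // Validate and commit the transaction's write set (per ITransactionManager).
            journal.commit(tx);
        }
    }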
From: Bryan T. <tho...@us...> - 2007-03-15 16:11:46
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/objndx In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv10595/src/test/com/bigdata/objndx Modified Files: TestIndexSegmentWithBloomFilter.java TestIndexSegmentBuilderWithLargeTrees.java TestAll.java Added Files: TestReopen.java Log Message: Refactoring to define service apis (data service, transaction manager service) and some approximate implementations of those services (not supporting service discovery, network protocol, or service robustness). Copied in the UML model so that it will actually get committed to CVS.... --- NEW FILE: TestReopen.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Nov 17, 2006 */ package com.bigdata.objndx; import com.bigdata.rawstore.IRawStore; import com.bigdata.rawstore.SimpleMemoryRawStore; /** * Unit tests for the close/reopen protocol designed to manage the resource * burden of indices without invalidating the index objects (indices opens can * be reopened as long as their backing store remains available). * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public class TestReopen extends AbstractBTreeTestCase { /** * */ public TestReopen() { } /** * @param name */ public TestReopen(String name) { super(name); } /** * Test close on a new tree - should force the root to the store since a new * root is dirty (if empty). reopen should then reload the empty root and on * life goes. */ public void test_reopen01() { BTree btree = getBTree(3); assertTrue(btree.isOpen()); btree.close(); assertFalse(btree.isOpen()); try { btree.close(); fail("Expecting: " + IllegalStateException.class); } catch (IllegalStateException ex) { System.err.println("Ignoring expected exception: " + ex); } assertNotNull(btree.getRoot()); assertTrue(btree.isOpen()); } /** * Stress test comparison with ground truth btree when {@link BTree#close()} * is randomly invoked during mutation operations. */ public void test_reopen02() { IRawStore store = new SimpleMemoryRawStore(); /* * The btree under test. * * Note: the fixture factory is NOT used since this node evictions will * be forced when this tree is closed (node evictions are not permitted * by the default fixture factory). 
*/ BTree btree = new BTree(store, 3, SimpleEntry.Serializer.INSTANCE); /* * The btree used to maintain ground truth. * * Note: the fixture factory is NOT used here since the stress test will * eventually overflow the hard reference queue and begin evicting nodes * and leaves onto the store. */ BTree groundTruth = new BTree(store, 3, SimpleEntry.Serializer.INSTANCE); final int limit = 10000; final int keylen = 6; for (int i = 0; i < limit; i++) { int n = r.nextInt(100); if (n < 5) { if(btree.isOpen()) btree.close(); } else if (n < 20) { byte[] key = new byte[keylen]; r.nextBytes(key); btree.remove(key); groundTruth.remove(key); } else { byte[] key = new byte[keylen]; r.nextBytes(key); SimpleEntry value = new SimpleEntry(i); btree.insert(key, value); groundTruth.insert(key, value); } } assertSameBTree(groundTruth, btree); } } Index: TestIndexSegmentBuilderWithLargeTrees.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/objndx/TestIndexSegmentBuilderWithLargeTrees.java,v retrieving revision 1.11 retrieving revision 1.12 diff -C2 -d -r1.11 -r1.12 *** TestIndexSegmentBuilderWithLargeTrees.java 8 Mar 2007 18:14:05 -0000 1.11 --- TestIndexSegmentBuilderWithLargeTrees.java 15 Mar 2007 16:11:09 -0000 1.12 *************** *** 52,56 **** import java.util.Properties; - import com.bigdata.cache.HardReferenceQueue; import com.bigdata.journal.BufferMode; import com.bigdata.journal.Journal; --- 52,55 ---- *************** *** 61,72 **** * and then compares the trees for the same total ordering. * - * @todo do a variant that only evicts index segments when the journal overflows - * and that merges segments as we go. While we can limit the size of the - * journal to provoke overflow, for this test will be unable to compare to - * the data in a single btree. Instead we have to compare against ground - * truth in either a generated or external dataset. This will also test - * the logic that directs queries and scans to the appropriate segment of - * a segmented index. - * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ --- 60,63 ---- *************** *** 210,214 **** doBuildIndexSegmentAndCompare( doSplitWithRandomDenseKeySequence( getBTree(m), m, m*m*m ) ); ! // @todo overflows the initial journal extent. // doBuildIndexSegmentAndCompare( doSplitWithRandomDenseKeySequence( getBTree(m), m, m*m*m*m ) ); --- 201,205 ---- doBuildIndexSegmentAndCompare( doSplitWithRandomDenseKeySequence( getBTree(m), m, m*m*m ) ); ! // Note: overflows the initial journal extent. // doBuildIndexSegmentAndCompare( doSplitWithRandomDenseKeySequence( getBTree(m), m, m*m*m*m ) ); *************** *** 239,243 **** doBuildIndexSegmentAndCompare( doInsertRandomSparseKeySequenceTest(m,m*m*m,trace) ); ! //@todo overflows the initial journal extent. // doBuildIndexSegmentAndCompare( doInsertRandomSparseKeySequenceTest(m,m*m*m*m,trace) ); --- 230,234 ---- doBuildIndexSegmentAndCompare( doInsertRandomSparseKeySequenceTest(m,m*m*m,trace) ); ! // Note: overflows the initial journal extent. // doBuildIndexSegmentAndCompare( doInsertRandomSparseKeySequenceTest(m,m*m*m*m,trace) ); *************** *** 277,293 **** + "), out(m=" + m + ")"); ! IndexSegmentBuilder builder = new IndexSegmentBuilder(outFile, ! tmpDir, btree, m, 0. ! /* ! * @todo pass in btree.nodeSer.valueSerializer ! */ ! ); /* ! * Verify can load the index file and that the metadata ! * associated with the index file is correct (we are only ! 
* checking those aspects that are easily defined by the test ! * case and not, for example, those aspects that depend on the ! * specifics of the length of serialized nodes or leaves). */ System.err.println("Opening index segment."); --- 268,279 ---- + "), out(m=" + m + ")"); ! new IndexSegmentBuilder(outFile, tmpDir, btree, m, 0.); /* ! * Verify can load the index file and that the metadata associated ! * with the index file is correct (we are only checking those ! * aspects that are easily defined by the test case and not, for ! * example, those aspects that depend on the specifics of the length ! * of serialized nodes or leaves). */ System.err.println("Opening index segment."); *************** *** 301,304 **** --- 287,302 ---- System.err.println("Closing index segment."); seg.close(); + + /* + * Note: close() is a reversable operation. This verifies that by + * immediately re-verifying the index segment. The index segment + * SHOULD be transparently re-opened for this operation. + */ + System.err.println("Re-verifying index segment."); + assertSameBTree(btree, seg); + + // Close again so that we can delete the backing file. + System.err.println("Re-closing index segment."); + seg.close(); if (!outFile.delete()) { Index: TestIndexSegmentWithBloomFilter.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/objndx/TestIndexSegmentWithBloomFilter.java,v retrieving revision 1.9 retrieving revision 1.10 diff -C2 -d -r1.9 -r1.10 *** TestIndexSegmentWithBloomFilter.java 8 Mar 2007 18:14:05 -0000 1.9 --- TestIndexSegmentWithBloomFilter.java 15 Mar 2007 16:11:09 -0000 1.10 *************** *** 58,62 **** import com.bigdata.journal.Journal; import com.bigdata.journal.Options; - import com.bigdata.rawstore.Bytes; /** --- 58,61 ---- *************** *** 284,291 **** System.err.println("Opening index segment w/ bloom filter."); final IndexSegment seg2 = new IndexSegmentFileStore(outFile2).load(); - // // setup reference queue. - // new HardReferenceQueue<PO>(new DefaultEvictionListener(), - // 100, 20), - // @todo btree.nodeSer.valueSerializer); /* --- 283,286 ---- *************** *** 379,386 **** System.err.println("Opening index segment w/o bloom filter."); final IndexSegment seg = new IndexSegmentFileStore(outFile).load(); - // // setup reference queue. - // new HardReferenceQueue<PO>(new DefaultEvictionListener(), - // 100, 20), - // @todo btree.nodeSer.valueSerializer); /* --- 374,377 ---- *************** *** 393,400 **** System.err.println("Opening index segment w/ bloom filter."); final IndexSegment seg2 = new IndexSegmentFileStore(outFile2).load(); - // // setup reference queue. - // new HardReferenceQueue<PO>(new DefaultEvictionListener(), - // 100, 20), - // @todo btree.nodeSer.valueSerializer); /* --- 384,387 ---- *************** *** 426,429 **** --- 413,417 ---- assertSameBTree(btree, seg); assertSameBTree(btree, seg2); + seg2.close(); // close seg w/ bloom filter and the verify with implicit reopen. assertSameBTree(seg, seg2); Index: TestAll.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/objndx/TestAll.java,v retrieving revision 1.31 retrieving revision 1.32 diff -C2 -d -r1.31 -r1.32 *** TestAll.java 8 Mar 2007 18:14:05 -0000 1.31 --- TestAll.java 15 Mar 2007 16:11:10 -0000 1.32 *************** *** 112,115 **** --- 112,117 ---- // verify that a store is restart-safe iff it commits. 
suite.addTestSuite( TestRestartSafe.class ); + // test the close/reopen protocol for releasing index buffers. + suite.addTestSuite( TestReopen.class ); /* *************** *** 132,136 **** /* ! * use of btree to support column store. * * @todo handle column names and timestamp as part of the key. --- 134,138 ---- /* ! * @todo use of btree to support column store (in another package) * * @todo handle column names and timestamp as part of the key. |
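TestReopen exercises the new close/reopen protocol: close() releases an index's buffers without invalidating the index object, and the next access transparently reloads it from the backing store. A compact sketch of that protocol, assuming an already-open BTree whose backing store remains available (the tests construct theirs with a test-only serializer, which is not repeated here):

    import com.bigdata.objndx.BTree;

    public class ReopenSketch {

        public static void demonstrate(BTree btree) {

            // Release the buffered nodes and leaves.
            btree.close();

            // Closing an already closed index is rejected (see test_reopen01).
            try {
                btree.close();
            } catch (IllegalStateException expected) {
                // ignored: this is the documented behavior.
            }

            // The index object stays valid: the next access (here getRoot())
            // transparently reloads the index from its backing store ...
            btree.getRoot();

            // ... after which the index is open and usable again.
            assert btree.isOpen();
        }
    }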
From: Bryan T. <tho...@us...> - 2007-03-15 16:11:46
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/cache
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv10595/src/java/com/bigdata/cache

Modified Files:
	ICacheEntry.java
Log Message:
Refactoring to define service apis (data service, transaction manager service) and some approximate implementations of those services (not supporting service discovery, network protocol, or service robustness). Copied in the UML model so that it will actually get committed to CVS....

Index: ICacheEntry.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/cache/ICacheEntry.java,v
retrieving revision 1.2
retrieving revision 1.3
diff -C2 -d -r1.2 -r1.3
*** ICacheEntry.java	8 Nov 2006 19:20:47 -0000	1.2
--- ICacheEntry.java	15 Mar 2007 16:11:12 -0000	1.3
***************
*** 56,61 ****
  * @version $Id$
  *
- * @todo long oid to Object (see {@link ICachePolicy}.
- *
  * @todo Support clearing updated objects for hot cache between transactions.
  *       Add metadata boolean that indicates whether the object was modified
--- 56,59 ----
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/journal In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv10595/src/java/com/bigdata/journal Modified Files: AbstractBufferStrategy.java ITransactionManager.java Name2Addr.java IJournal.java CommitRecord.java Tx.java ITx.java IAtomicStore.java AbstractTx.java TemporaryRawStore.java ReadCommittedTx.java Journal.java Added Files: ITxCommitProtocol.java ResourceManager.java AbstractJournal.java Removed Files: TransactionServer.java JournalServer.java Log Message: Refactoring to define service apis (data service, transaction manager service) and some approximate implementations of those services (not supporting service discovery, network protocol, or service robustness). Copied in the UML model so that it will actually get committed to CVS.... --- NEW FILE: AbstractJournal.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations [...1967 lines suppressed...] if (name == null) { throw new IllegalArgumentException(); } ITx tx = activeTx.get(ts); if (tx == null) { throw new IllegalStateException(); } return tx.getIndex(name); } } Index: IJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/IJournal.java,v retrieving revision 1.9 retrieving revision 1.10 diff -C2 -d -r1.9 -r1.10 *** IJournal.java 11 Mar 2007 11:42:45 -0000 1.9 --- IJournal.java 15 Mar 2007 16:11:12 -0000 1.10 *************** *** 62,67 **** * @version $Id$ */ ! public interface IJournal extends IMROW, IAtomicStore, IIndexManager, ! ITransactionManager { /** --- 62,66 ---- * @version $Id$ */ ! public interface IJournal extends IMROW, IAtomicStore, IIndexManager { /** Index: CommitRecord.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/CommitRecord.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** CommitRecord.java 12 Mar 2007 18:06:11 -0000 1.4 --- CommitRecord.java 15 Mar 2007 16:11:12 -0000 1.5 *************** *** 63,67 **** /** * @todo this may not be the correct commit counter unless this method is ! * synchronized with the commitService. * * @todo are commit counters global or local? --- 63,67 ---- /** * @todo this may not be the correct commit counter unless this method is ! * synchronized with the writeService. * * @todo are commit counters global or local? Index: Name2Addr.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Name2Addr.java,v retrieving revision 1.6 retrieving revision 1.7 diff -C2 -d -r1.6 -r1.7 *** Name2Addr.java 12 Mar 2007 18:06:11 -0000 1.6 --- Name2Addr.java 15 Mar 2007 16:11:12 -0000 1.7 *************** *** 181,188 **** // re-load btree from the store. 
btree = BTree.load(this.store, entry.addr); ! // save name -> btree mapping in transient cache. indexCache.put(name,btree); // return btree. return btree; --- 181,191 ---- // re-load btree from the store. btree = BTree.load(this.store, entry.addr); ! // save name -> btree mapping in transient cache. indexCache.put(name,btree); + // report event (loaded btree). + ResourceManager.openUnisolatedBTree(name); + // return btree. return btree; --- NEW FILE: ITxCommitProtocol.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Mar 15, 2007 */ package com.bigdata.journal; import com.bigdata.service.IDataService; /** * An interface implemented by an {@link IDataService} for the commit / abort of * the local write set for a transaction as directed by a centralized * {@link ITransactionManager} in response to client requests. * <p> * Clients DO NOT make direct calls against this API. Instead, they MUST locate * the {@link ITransactionManager} service and direct messages to that service. * <p> * Note: These methods should be invoked iff the transaction manager knows that * the {@link IDataService} is buffering writes for the transaction. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ * * FIXME in order to support 2-/3-phase commit, the [commitTime] from the * transaction manager service must be passed through to the journal. There also * needs to be a distinct "prepare" message that validates the write set of the * transaction and makes it restart safe. finally, i have to coordinate the * serialization of the wait for the "commit" message. */ public interface ITxCommitProtocol { /** * Request commit of the transaction write set. */ public void commit(long tx); /** * Request abort of the transaction write set. */ public void abort(long tx); } Index: ITx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/ITx.java,v retrieving revision 1.6 retrieving revision 1.7 diff -C2 -d -r1.6 -r1.7 *** ITx.java 11 Mar 2007 11:42:44 -0000 1.6 --- ITx.java 15 Mar 2007 16:11:12 -0000 1.7 *************** *** 131,134 **** --- 131,139 ---- /** + * The type-safe isolation level for this transaction. 
+ */ + public IsolationEnum getIsolationLevel(); + + /** * When true, the transaction will reject writes. */ *************** *** 136,139 **** --- 141,149 ---- /** + * When true, the transaction has an empty write set. + */ + public boolean isEmptyWriteSet(); + + /** * A transaction is "active" when it is created and remains active until it * prepares or aborts. An active transaction accepts READ, WRITE, DELETE, Index: TemporaryRawStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/TemporaryRawStore.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** TemporaryRawStore.java 6 Mar 2007 20:38:06 -0000 1.3 --- TemporaryRawStore.java 15 Mar 2007 16:11:13 -0000 1.4 *************** *** 111,117 **** /** ! * Create a {@link TemporaryRawStore} with an initial in-memory capacity of 10M ! * that will grow up to 100M before converting into a disk-based store ! * backed by a temporary file. * * @todo the memory growth strategy does not respect the in-memory maximum --- 111,122 ---- /** ! * Create a {@link TemporaryRawStore} with an initial in-memory capacity of ! * 10M that will grow up to 100M before converting into a disk-based store ! * backed by a temporary file. These defaults are appropriate for a ! * relatively small number of processes that will write a lot of data. If ! * you have a lot of processes then you need to be more conservative with ! * RAM in the initial allocation and switch over to disk sooner. For ! * example, transactions use smaller defaults in order to support a large ! * #of concurrent transactions without a large memory burden. * * @todo the memory growth strategy does not respect the in-memory maximum *************** *** 170,177 **** open = false; - // buf.close(); - // - // buf.deleteFile(); - buf.closeAndDelete(); --- 175,178 ---- *************** *** 241,244 **** --- 242,249 ---- try { + /* + * Note: this operation will transparently extend the in-memory + * buffer as necessary up to the specified maximum capacity. + */ return buf.write(data); Index: ITransactionManager.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/ITransactionManager.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** ITransactionManager.java 11 Mar 2007 11:42:45 -0000 1.4 --- ITransactionManager.java 15 Mar 2007 16:11:12 -0000 1.5 *************** *** 1,45 **** /** ! The Notice below must appear in each file of the Source Code of any ! copy you distribute of the Licensed Product. Contributors to any ! Modifications may add their own copyright notices to identify their ! own contributions. ! License: ! The contents of this file are subject to the CognitiveWeb Open Source ! License Version 1.1 (the License). You may not copy or use this file, ! in either source code or executable form, except in compliance with ! the License. You may obtain a copy of the License from ! http://www.CognitiveWeb.org/legal/license/ ! Software distributed under the License is distributed on an AS IS ! basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See ! the License for the specific language governing rights and limitations ! under the License. ! Copyrights: ! Portions created by or assigned to CognitiveWeb are Copyright ! (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact ! information for CognitiveWeb is available at ! http://www.CognitiveWeb.org ! 
Portions Copyright (c) 2002-2003 Bryan Thompson. ! Acknowledgements: ! Special thanks to the developers of the Jabber Open Source License 1.0 ! (JOSL), from which this License was derived. This License contains ! terms that differ from JOSL. ! Special thanks to the CognitiveWeb Open Source Contributors for their ! suggestions and support of the Cognitive Web. ! Modifications: ! */ /* * Created on Feb 19, 2007 --- 1,45 ---- /** ! The Notice below must appear in each file of the Source Code of any ! copy you distribute of the Licensed Product. Contributors to any ! Modifications may add their own copyright notices to identify their ! own contributions. ! License: ! The contents of this file are subject to the CognitiveWeb Open Source ! License Version 1.1 (the License). You may not copy or use this file, ! in either source code or executable form, except in compliance with ! the License. You may obtain a copy of the License from ! http://www.CognitiveWeb.org/legal/license/ ! Software distributed under the License is distributed on an AS IS ! basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See ! the License for the specific language governing rights and limitations ! under the License. ! Copyrights: ! Portions created by or assigned to CognitiveWeb are Copyright ! (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact ! information for CognitiveWeb is available at ! http://www.CognitiveWeb.org ! Portions Copyright (c) 2002-2003 Bryan Thompson. ! Acknowledgements: ! Special thanks to the developers of the Jabber Open Source License 1.0 ! (JOSL), from which this License was derived. This License contains ! terms that differ from JOSL. ! Special thanks to the CognitiveWeb Open Source Contributors for their ! suggestions and support of the Cognitive Web. ! Modifications: ! */ /* * Created on Feb 19, 2007 *************** *** 50,54 **** import com.bigdata.isolation.IConflictResolver; import com.bigdata.isolation.UnisolatedBTree; - import com.bigdata.objndx.IIndex; import com.bigdata.objndx.IndexSegment; --- 50,53 ---- *************** *** 62,73 **** /** - * Create a new fully-isolated read-write transaction. - * - * @return The transaction start time, which serves as the unique identifier - * for the transaction. - */ - public long newTx(); - - /** * Create a new transaction. * <p> --- 61,64 ---- *************** *** 107,135 **** public long newTx(IsolationEnum level); - // /** - // * Create a new fully-isolated transaction. - // * - // * @param readOnly - // * When true, the transaction will reject writes. - // * - // * @return The transaction start time, which serves as the unique identifier - // * for the transaction. - // */ - // public long newTx(boolean readOnly); - // - // /** - // * Create a new read-committed transaction. The transaction will reject - // * writes. Any data committed by concurrent transactions will become visible - // * to indices isolated by this transaction (hence, "read comitted"). - // * <p> - // * This provides more isolation than "read dirty" since the concurrent - // * transactions MUST commit before their writes become visible to the a - // * read-committed transaction. - // * - // * @return The transaction start time, which serves as the unique identifier - // * for the transaction. - // */ - // public long newReadCommittedTx(); - /** * Abort the transaction. --- 98,101 ---- *************** *** 150,154 **** * The transaction start time, which serves as the unique * identifier for the transaction. ! 
* * @return The commit timestamp assigned to the transaction. * --- 116,120 ---- * The transaction start time, which serves as the unique * identifier for the transaction. ! * * @return The commit timestamp assigned to the transaction. * *************** *** 156,160 **** * if there is no active transaction with that timestamp. */ ! public long commit(long startTime); } --- 122,126 ---- * if there is no active transaction with that timestamp. */ ! public long commit(long startTime) throws ValidationError; } --- TransactionServer.java DELETED --- Index: Tx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Tx.java,v retrieving revision 1.36 retrieving revision 1.37 diff -C2 -d -r1.36 -r1.37 *** Tx.java 12 Mar 2007 18:06:11 -0000 1.36 --- Tx.java 15 Mar 2007 16:11:12 -0000 1.37 *************** *** 92,100 **** * @version $Id$ * ! * @todo track whether or not the transaction has written any isolated data. do ! * this at the same time that I modify the isolated indices use a * delegation strategy so that I can trap attempts to access an isolated ! * index once the transaction is no longer active. define "active" as ! * up to the point where a "commit" or "abort" is _requested_ for the tx. * * @todo Support transactions where the indices isolated by the transactions are --- 92,111 ---- * @version $Id$ * ! * @todo In order to support a distributed transaction commit protocol the write ! * set of a validated transaction needs to be made restart safe without ! * making it restart safe on the corresponding unisolated index on the ! * journal. It may be that the right thing to do is to write the validated ! * data onto the unisolated indices but not commit the journal and not ! * permit other unisolated writes until the commit message arives, e.g., ! * block in the {@link AbstractJournal#writeService} waiting on the commit ! * message. A timeout would cause the buffered writes to be discarded (by ! * an abort). ! * ! * @todo track whether or not the transaction has written any isolated data (I ! * currently rangeCount the isolated indices in {@link #isEmptyWriteSet()}). ! * do this at the same time that I modify the isolated indices use a * delegation strategy so that I can trap attempts to access an isolated ! * index once the transaction is no longer active. define "active" as up ! * to the point where a "commit" or "abort" is _requested_ for the tx. * * @todo Support transactions where the indices isolated by the transactions are *************** *** 167,173 **** * {@link #prepare()} and {@link #commit()} will be NOPs. */ ! public Tx(Journal journal, long startTime, boolean readOnly) { ! super(journal, startTime, readOnly); /* --- 178,185 ---- * {@link #prepare()} and {@link #commit()} will be NOPs. */ ! public Tx(AbstractJournal journal, long startTime, boolean readOnly) { ! super(journal, startTime, readOnly ? IsolationEnum.ReadOnly ! : IsolationEnum.ReadWrite); /* *************** *** 315,321 **** if (!isActive()) { ! ! throw new IllegalStateException(); ! } --- 327,333 ---- if (!isActive()) { ! ! throw new IllegalStateException(NOT_ACTIVE); ! } *************** *** 360,363 **** --- 372,378 ---- // writeable index backed by the temp. store. index = new IsolatedBTree(tmpStore,src); + + // report event. 
+ ResourceManager.isolateIndex(startTime, name); } *************** *** 371,373 **** --- 386,417 ---- } + final public boolean isEmptyWriteSet() { + + if(isReadOnly()) { + + // Read-only transactions always have empty write sets. + return true; + + } + + Iterator<IIsolatedIndex> itr = btrees.values().iterator(); + + while(itr.hasNext()) { + + IsolatedBTree ndx = (IsolatedBTree) itr.next(); + + if(!ndx.isEmptyWriteSet()) { + + // At least one isolated index was written on. + + return false; + + } + + } + + return true; + + } + } Index: Journal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Journal.java,v retrieving revision 1.61 retrieving revision 1.62 diff -C2 -d -r1.61 -r1.62 *** Journal.java 12 Mar 2007 18:06:11 -0000 1.61 --- Journal.java 15 Mar 2007 16:11:13 -0000 1.62 *************** *** 80,946 **** /** ! * <p> ! * The {@link Journal} is an append-only persistence capable data structure ! * supporting atomic commit, named indices, and transactions. Writes are ! * logically appended to the journal to minimize disk head movement. ! * </p> ! * <p> ! * Commit processing. The journal maintains two root blocks. Commit updates the ! * root blocks using the Challis algorithm. (The root blocks are updated using [...1773 lines suppressed...] - } - - return tx; - - } - - public IIndex getIndex(String name, long ts) { - - if(name == null) throw new IllegalArgumentException(); - - ITx tx = activeTx.get(ts); - - if(tx==null) throw new IllegalStateException(); - - return tx.getIndex(name); - - } } --- 146,149 ---- Index: ReadCommittedTx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/ReadCommittedTx.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** ReadCommittedTx.java 12 Mar 2007 18:06:11 -0000 1.2 --- ReadCommittedTx.java 15 Mar 2007 16:11:13 -0000 1.3 *************** *** 92,98 **** public class ReadCommittedTx extends AbstractTx implements ITx { ! public ReadCommittedTx(Journal journal, long startTime ) { ! super(journal, startTime, true /*readOnly*/); } --- 92,107 ---- public class ReadCommittedTx extends AbstractTx implements ITx { ! public ReadCommittedTx(AbstractJournal journal, long startTime ) { ! super(journal, startTime, IsolationEnum.ReadCommitted); ! ! } ! ! /** ! * The write set is always empty. ! */ ! final public boolean isEmptyWriteSet() { ! ! return true; } *************** *** 106,109 **** --- 115,124 ---- public IIndex getIndex(String name) { + if (!isActive()) { + + throw new IllegalStateException(NOT_ACTIVE); + + } + if (journal.getIndex(name) == null) { --- NEW FILE: ResourceManager.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. 
Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Mar 13, 2007 */ package com.bigdata.journal; import java.text.NumberFormat; import org.apache.log4j.Level; import org.apache.log4j.Logger; import com.bigdata.objndx.IndexSegment; import com.bigdata.objndx.IndexSegmentBuilder; import com.bigdata.rawstore.Bytes; import com.bigdata.scaleup.MasterJournal; /** * This class is responsible for integrating events that report on resource * consumption and latency and taking actions that may seek to minimize latency * or resource consumption. * <p> * Resource consumption events include * <ol> * <li>mutable unisolated indices open on the journal</li> * <li>mutable isolated indices open in writable transactions</li> * <li>historical read-only indices open on old journals</li> * <li>historical read-only index segments</li> * </ol> * * The latter two classes of event sources exist iff {@link Journal#overflow()} * is handled by creating a new {@link Journal} and evicting data from the old * {@link Journal} asynchronously onto read-optimized {@link IndexSegment}s. * * Other resource consumption events deal directly with transactions * <ol> * <li>open a transaction</li> * <li>close a transaction</li> * <li>a heartbeat for each write operation on a transaction is used to update * the resource consumption of the store</li> * </ol> * * <p> * Latency events include * <ol> * <li>request latency, that is, the time that a request waits on a queue * before being serviced</li> * <li>transactions per second</li> * </ol> * * @todo report the partition identifier as part of the index segment events. * * @todo use a hard reference queue to track recently used AbstractBTrees. a a * public referenceCount field on AbstractBTree and close the * AbstractBTree on eviction from the hard reference queue iff the * referenceCount is zero (no references to that AbstractBTree remain on * the hard reference queue). * * @todo consider handling close out of index partitions "whole at once" to * include all index segments in the current view of that partition. this * probably does not matter might be a nicer level of aggregation than the * individual index segment. * * @todo this still does not suggest a mechanism for close by timeout. one * solutions is to just close down all open indices if the server quieses. * if the server is not quiesent then unused indices will get shutdown in * any case. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public class ResourceManager { /** * Logger. * * @todo change the logger configuration to write on a JMS queue or JINI * discovered service in order to aggregate results from multiple * hosts in a scale-out solution. * * @todo a scale-out solution will need to report the data service identity * with each event so that we can model load by data service and host. * * @todo actions taken based on this information must be directed to the * appropriate data service. 
*/ public static final Logger log = Logger.getLogger(ResourceManager.class); /** * True iff the {@link #log} level is INFO or less. */ final public static boolean INFO = log.getEffectiveLevel().toInt() <= Level.INFO .toInt(); static NumberFormat cf; static NumberFormat fpf; static { cf = NumberFormat.getNumberInstance(); cf.setGroupingUsed(true); fpf = NumberFormat.getNumberInstance(); fpf.setGroupingUsed(false); fpf.setMaximumFractionDigits(2); } // /* // * Unisolated index reporting. // */ // private Map<String/*name*/,Counters> unisolated = new ConcurrentHashMap<String, Counters>(); /** * Report opening of a mutable unisolated named index on an {@link IJournal}. * * @param name * The index name. */ static public void openUnisolatedBTree(String name) { if(INFO) log.info("name="+name); } /** * Report closing of a mutable unisolated named index on an {@link IJournal}. * * @param name * The index name. * * @todo never invoked since we do not explicitly close out indices and are * not really able to differentiate the nature of the index when it is * finalized (unisolated vs isolated vs index segment can be * identified based on their interfaces). */ static public void closeUnisolatedBTree(String name) { if(INFO) log.info("name="+name); } /** * Report drop of a named unisolated index. * * @param name * The index name. */ static public void dropUnisolatedBTree(String name) { if(INFO) log.info("name="+name); } /* * Index segment reporting. */ /** * Report that an {@link IndexSegment} has been opened. * * @param name * The index name or null if this is not a named index. * @param filename * The name of the file containing the {@link IndexSegment}. * @param nbytes * The size of that file in bytes. * * @todo memory burden depends on the buffered data (nodes or nodes + * leaves) * * @todo the index name is not being reported since it is not part of the * extension metadata record at this time. this means that we can not * aggregate events for index segments for a given named index at this * time. */ static public void openIndexSegment(String name, String filename, long nbytes) { if(INFO) log.info("name="+name+", filename="+filename+", #bytes="+nbytes); } /** * Report that an {@link IndexSegment} has been closed. * * @param filename * * @todo we do not close out index segments based on non-use (e.g., timeout * or LRU). */ static public void closeIndexSegment(String filename) { if(INFO) log.info("filename="+filename); } /** * Report on a bulk merge/build of an {@link IndexSegment}. * * @param name * The index name or null if this is not a named index. * @param filename * The name of the file on which the index segment was written. * @param nentries * The #of entries in the {@link IndexSegment}. * @param elapsed * The elapsed time of the operation that built the index segment * (merge + build). * @param nbytes * The #of bytes in the {@link IndexSegment}. * * @todo the event is reported from {@link IndexSegmentBuilder} and does not * report the index name and does not account for resources * (time/space) required by the merge aspect of a bulk build. */ static public void buildIndexSegment(String name, String filename, int nentries, long elapsed, long nbytes) { if (INFO) { // data rate in MB/sec. float mbPerSec = (elapsed == 0 ? 
0 : nbytes / Bytes.megabyte32 / (elapsed / 1000f)); log.info("name=" + name + ", filename=" + filename + ", nentries=" + nentries + ", elapsed=" + elapsed + ", " + fpf.format(((double) nbytes / Bytes.megabyte32)) + "MB" + ", rate=" + fpf.format(mbPerSec) + "MB/sec"); } } /* * Transaction reporting. * * @todo the clock time for a distributed transaction can be quite different * from the time that a given transaction was actually open on a given data * service. the former is simply [commitTime - startTime] while the latter * depends on the clock times at which the transaction was opened and closed * on the data service. */ /** * Report the start of a new transaction. * * @param startTime * Both the transaction identifier and its global start time. * @param level * The isolation level of the transaction. */ static public void openTx(long startTime, IsolationEnum level) { if(INFO) log.info("tx="+startTime+", level="+level); } /** * Report completion of a transaction. * * @param startTime * The transaction identifier. * @param commitTime * The commit timestamp (non-zero iff this was a writable * transaction that committed successfully and zero otherwise). * @param aborted * True iff the transaction aborted vs completing successfully. */ static public void closeTx(long startTime, long commitTime, boolean aborted) { if(INFO) log.info("tx=" + startTime + ", commitTime=" + commitTime + ", aborted=" + aborted + ", elapsed=" + (commitTime - startTime)); } /** * Report the extension of the {@link TemporaryRawStore} associated with a * transaction and whether or not it has spilled onto the disk. * * @param startTime * The transaction identifier. * @param nbytes * The #of bytes written on the {@link TemporaryRawStore} for * that transaction. * @param onDisk * True iff the {@link TemporaryRawStore} has spilled over to * disk for the transaction. * * @todo event is not reported. */ static public void extendTx(long startTime, long nbytes, boolean onDisk) { } /** * Report the isolation of a named index by a transaction. * * @param startTime * The transaction identifier. * @param name * The index name. */ static public void isolateIndex(long startTime, String name) { if(INFO) log.info("tx="+startTime+", name="+name); /* * Note: there is no separate close for isolated indices - they are * closed when the transaction commits or aborts. read-write indices can * not be closed before the transactions completes, but read-only * indices can be closed early and reopened as required. read-committed * indices are always changing over to the most current committed state * for an index. both read-only and read-committed indices MAY be shared * by more than one transaction (@todo verify that the protocol for * sharing is in place on the journal). */ } /* * Journal file reporting. */ /** * Report the opening of an {@link IJournal} resource. * * @param filename * The filename or null iff the journal was not backed by a file. * @param nbytes * The total #of bytes available on the journal. * @param bufferMode * The buffer mode in use by the journal. */ static public void openJournal(String filename, long nbytes, BufferMode bufferMode) { if(INFO) log.info("filename="+filename+", #bytes="+nbytes+", mode="+bufferMode); } /** * Report the extension of an {@link IJournal}. * * @param filename * The filename or null iff the journal was not backed by a file. * @param nbytes * The total #of bytes available (vs written) on the journal. 
* * @todo this does not differentiate between extension of a buffer backing a * journal and extension of a {@link TemporaryRawStore}. This means * that the resources allocated to a transaction vs the unisolated * indices on a journal can not be differentiated. */ static public void extendJournal(String filename, long nbytes) { if(INFO) log.info("filename="+filename+", #bytes="+nbytes); } /** * Report an overflow event. * * @param filename * The filename or null iff the journal was not backed by a file. * @param nbytes * The total #of bytes written on the journal. */ static public void overflowJournal(String filename, long nbytes) { if(INFO) log.info("filename="+filename+", #bytes="+nbytes); } /** * Report close of an {@link IJournal} resource. * * @param filename * The filename or null iff the journal was not backed by a file. */ static public void closeJournal(String filename) { if(INFO) log.info("filename="+filename); } /** * Report deletion of an {@link IJournal} resource. * * @param filename * The filename or null iff the journal was not backed by a file. * * @todo also report deletion of resources for journals that were already * closed but not yet deleted pending client leases or updates of the * metadata index (in the {@link MasterJournal}). */ static public void deleteJournal(String filename) { if(INFO) log.info("filename="+filename); } } Index: AbstractTx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/AbstractTx.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** AbstractTx.java 11 Mar 2007 11:42:45 -0000 1.3 --- AbstractTx.java 15 Mar 2007 16:11:13 -0000 1.4 *************** *** 69,73 **** * commit protocol and to locate the named indices that it isolates. */ ! final protected Journal journal; /** --- 69,73 ---- * commit protocol and to locate the named indices that it isolates. */ ! final protected AbstractJournal journal; /** *************** *** 97,103 **** final protected boolean readOnly; private RunState runState; ! protected AbstractTx(Journal journal, long startTime, boolean readOnly ) { if (journal == null) --- 97,109 ---- final protected boolean readOnly; + /** + * The type-safe enumeration representing the isolation level of this + * transaction. + */ + final protected IsolationEnum level; + private RunState runState; ! protected AbstractTx(AbstractJournal journal, long startTime, IsolationEnum level ) { if (journal == null) *************** *** 110,115 **** this.startTime = startTime; ! this.readOnly = readOnly; // pre-compute the hash code for the transaction. this.hashCode = Long.valueOf(startTime).hashCode(); --- 116,123 ---- this.startTime = startTime; ! this.readOnly = level != IsolationEnum.ReadWrite;; + this.level = level; + // pre-compute the hash code for the transaction. this.hashCode = Long.valueOf(startTime).hashCode(); *************** *** 119,122 **** --- 127,133 ---- this.runState = RunState.Active; + // report event. + ResourceManager.openTx(startTime, level); + } *************** *** 178,182 **** } ! final public boolean isReadOnly() { --- 189,199 ---- } ! ! final public IsolationEnum getIsolationLevel() { ! ! return level; ! ! } ! 
final public boolean isReadOnly() { *************** *** 226,229 **** --- 243,248 ---- journal.completedTx(this); + ResourceManager.closeTx(startTime, commitTime, true); + } finally { *************** *** 344,347 **** --- 363,368 ---- journal.completedTx(this); + ResourceManager.closeTx(startTime, commitTime, false); + } catch( Throwable t) { Index: IAtomicStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/IAtomicStore.java,v retrieving revision 1.6 retrieving revision 1.7 diff -C2 -d -r1.6 -r1.7 *** IAtomicStore.java 12 Mar 2007 18:06:11 -0000 1.6 --- IAtomicStore.java 15 Mar 2007 16:11:13 -0000 1.7 *************** *** 61,70 **** /** ! * Abandon the current write set. */ public void abort(); /** ! * Request an atomic commit. * * @return The timestamp assigned to the {@link ICommitRecord} -or- 0L if --- 61,70 ---- /** ! * Abandon the current write set (immediate, synchronous). */ public void abort(); /** ! * Atomic commit (immediate, synchronous). * * @return The timestamp assigned to the {@link ICommitRecord} -or- 0L if --- JournalServer.java DELETED --- Index: AbstractBufferStrategy.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/AbstractBufferStrategy.java,v retrieving revision 1.12 retrieving revision 1.13 diff -C2 -d -r1.12 -r1.13 *** AbstractBufferStrategy.java 6 Mar 2007 20:38:06 -0000 1.12 --- AbstractBufferStrategy.java 15 Mar 2007 16:11:12 -0000 1.13 *************** *** 159,162 **** --- 159,166 ---- */ truncate( newExtent ); + + // report event. + ResourceManager.extendJournal(getFile() == null ? null : getFile() + .toString(), newExtent); // Retry the write operation. |
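
Editor's note: the throughput figure logged by buildIndexSegment() in the ResourceManager code above is simply bytes converted to megabytes divided by elapsed seconds, with the elapsed == 0 case short-circuited to avoid a divide by zero, and the result formatted by a NumberFormat with grouping off and two fraction digits. The following is a minimal standalone sketch of that calculation only; the MEGABYTE constant and the sample inputs are assumptions (Bytes.megabyte32 in the actual code is presumed to be the same 2^20 value), and this is not part of the commit.

import java.text.NumberFormat;

public class BuildRateExample {

    // Assumed equivalent of Bytes.megabyte32 (2^20 bytes).
    private static final float MEGABYTE = 1024f * 1024f;

    /**
     * Returns the build rate in MB/sec, or zero when the elapsed time is zero
     * (mirrors the guard in ResourceManager.buildIndexSegment()).
     */
    static float megabytesPerSecond(long nbytes, long elapsedMillis) {

        return elapsedMillis == 0 ? 0f : nbytes / MEGABYTE / (elapsedMillis / 1000f);

    }

    public static void main(String[] args) {

        // Hypothetical sample values: a 25MB index segment built in 4 seconds.
        long nbytes = 25L * 1024 * 1024;
        long elapsed = 4000L;

        // Same formatter setup as the static fpf field above.
        NumberFormat fpf = NumberFormat.getNumberInstance();
        fpf.setGroupingUsed(false);
        fpf.setMaximumFractionDigits(2);

        System.out.println("rate=" + fpf.format(megabytesPerSecond(nbytes, elapsed)) + "MB/sec");

    }

}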
From: Bryan T. <tho...@us...> - 2007-03-15 16:11:40
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv10595/src/java/com/bigdata/isolation Modified Files: UnisolatedIndexSegment.java IsolatedBTree.java UnisolatedBTree.java Log Message: Refactoring to define service apis (data service, transaction manager service) and some approximate implementations of those services (not supporting service discovery, network protocol, or service robustness). Copied in the UML model so that it will actually get committed to CVS.... Index: UnisolatedIndexSegment.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation/UnisolatedIndexSegment.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** UnisolatedIndexSegment.java 8 Mar 2007 18:14:06 -0000 1.1 --- UnisolatedIndexSegment.java 15 Mar 2007 16:11:13 -0000 1.2 *************** *** 198,202 **** public IEntryIterator rangeIterator(byte[] fromKey, byte[] toKey) { ! return root.rangeIterator(fromKey, toKey, DeletedEntryFilter.INSTANCE); } --- 198,202 ---- public IEntryIterator rangeIterator(byte[] fromKey, byte[] toKey) { ! return getRoot().rangeIterator(fromKey, toKey, DeletedEntryFilter.INSTANCE); } *************** *** 204,208 **** public IEntryIterator entryIterator() { ! return root.rangeIterator(null, null, DeletedEntryFilter.INSTANCE); } --- 204,208 ---- public IEntryIterator entryIterator() { ! return rangeIterator(null, null); } Index: UnisolatedBTree.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation/UnisolatedBTree.java,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -d -r1.8 -r1.9 *** UnisolatedBTree.java 8 Mar 2007 18:14:06 -0000 1.8 --- UnisolatedBTree.java 15 Mar 2007 16:11:13 -0000 1.9 *************** *** 434,438 **** public IEntryIterator rangeIterator(byte[] fromKey, byte[] toKey) { ! return root.rangeIterator(fromKey, toKey, DeletedEntryFilter.INSTANCE); } --- 434,438 ---- public IEntryIterator rangeIterator(byte[] fromKey, byte[] toKey) { ! return getRoot().rangeIterator(fromKey, toKey, DeletedEntryFilter.INSTANCE); } *************** *** 440,444 **** public IEntryIterator entryIterator() { ! return root.rangeIterator(null, null, DeletedEntryFilter.INSTANCE); } --- 440,444 ---- public IEntryIterator entryIterator() { ! return rangeIterator(null, null); } Index: IsolatedBTree.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation/IsolatedBTree.java,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -d -r1.8 -r1.9 *** IsolatedBTree.java 11 Mar 2007 11:42:44 -0000 1.8 --- IsolatedBTree.java 15 Mar 2007 16:11:13 -0000 1.9 *************** *** 179,182 **** --- 179,191 ---- /** + * True iff there are no writes on this isolated index. + */ + public boolean isEmptyWriteSet() { + + return super.nentries == 0; + + } + + /** * If the key is not in the write set, then delegate to * {@link UnisolatedBTree#contains(byte[])} on the isolated index. If the *************** *** 440,444 **** * objects and see both deleted and undeleted entries. */ ! final IEntryIterator itr = root.rangeIterator(null, null, null); while (itr.hasNext()) { --- 449,453 ---- * objects and see both deleted and undeleted entries. */ ! 
final IEntryIterator itr = getRoot().rangeIterator(null, null, null); while (itr.hasNext()) { *************** *** 560,564 **** * objects and see both deleted and undeleted entries. */ ! final IEntryIterator itr = root.rangeIterator(null, null, null); while (itr.hasNext()) { --- 569,573 ---- * objects and see both deleted and undeleted entries. */ ! final IEntryIterator itr = getRoot().rangeIterator(null, null, null); while (itr.hasNext()) { |
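
Editor's note: the pattern in the isolation diffs above is that the client-facing traversal methods (entryIterator(), rangeIterator()) pass DeletedEntryFilter.INSTANCE so that delete markers stay hidden, while the internal merge/validate loops pass a null filter so they can see both deleted and undeleted entries. The actual DeletedEntryFilter implementation is not part of this commit; the sketch below only illustrates that filtering idea in general-purpose Java, with the Entry type and its deleted flag as assumptions, not bigdata classes.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

public class DeletedEntryFilterSketch {

    /** Hypothetical stand-in for an index entry carrying a delete marker. */
    static class Entry {
        final String key;
        final boolean deleted;
        Entry(String key, boolean deleted) { this.key = key; this.deleted = deleted; }
    }

    /**
     * Wraps a source iterator and hides entries flagged as deleted, analogous
     * in spirit to passing DeletedEntryFilter.INSTANCE to rangeIterator().
     * Passing the source through unwrapped corresponds to the null filter used
     * by the merge/validate loops, which must see the delete markers.
     */
    static Iterator<Entry> filterDeleted(final Iterator<Entry> src) {
        return new Iterator<Entry>() {
            private Entry next = advance();
            private Entry advance() {
                while (src.hasNext()) {
                    Entry e = src.next();
                    if (!e.deleted) return e; // skip delete markers.
                }
                return null;
            }
            public boolean hasNext() { return next != null; }
            public Entry next() {
                if (next == null) throw new NoSuchElementException();
                Entry current = next;
                next = advance();
                return current;
            }
            public void remove() { throw new UnsupportedOperationException(); }
        };
    }

    public static void main(String[] args) {

        List<Entry> entries = new ArrayList<Entry>();
        entries.add(new Entry("a", false));
        entries.add(new Entry("b", true));  // delete marker - hidden from clients.
        entries.add(new Entry("c", false));

        // Client-facing traversal: only "a" and "c" are visible.
        for (Iterator<Entry> it = filterDeleted(entries.iterator()); it.hasNext();) {
            System.out.println(it.next().key);
        }

    }

}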
From: Bryan T. <tho...@us...> - 2007-03-15 16:11:40
Update of /cvsroot/cweb/bigdata/src/architecture In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv10595/src/architecture Added Files: bigdata.emx Log Message: Refactoring to define service apis (data service, transaction manager service) and some approximate implementations of those services (not supporting service discovery, network protocol, or service robustness). Copied in the UML model so that it will actually get committed to CVS.... --- NEW FILE: bigdata.emx --- <?xml version="1.0" encoding="UTF-8"?> <!--xtools2_universal_type_manager--> <uml:Model xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:notation="http://www.ibm.com/xtools/1.5.0/Notation" xmlns:uml="http://www.eclipse.org/uml2/1.0.0/UML" xmlns:umlnotation="http://www.ibm.com/xtools/1.5.0/Umlnotation" xmi:id="_EWlSMIqqEduD57FOfu5USw" name="architecture" appliedProfile="_EXeDAIqqEduD57FOfu5USw _EYEf8IqqEduD57FOfu5USw _EYhL4IqqEduD57FOfu5USw _EZtesIqqEduD57FOfu5USw _Ecr6MIqqEduD57FOfu5USw"> <eAnnotations xmi:id="_EWlSMYqqEduD57FOfu5USw" source="uml2.diagrams" references="_MouEoM2fEdutrbzDPceyvA"> <contents xmi:type="notation:Diagram" xmi:id="_MouEoM2fEdutrbzDPceyvA" type="Class" name="Main"> <children xmi:id="_OQR4UM2fEdutrbzDPceyvA" element="_FjcUwM2DEduTE5xAot6nFA"> <children xmi:id="_OQR4U82fEdutrbzDPceyvA" type="ImageCompartment" element="_FjcUwM2DEduTE5xAot6nFA"> <layoutConstraint xmi:type="notation:Size" xmi:id="_OQR4VM2fEdutrbzDPceyvA" width="1320" height="1320"/> </children> <children xmi:id="_OQR4Vc2fEdutrbzDPceyvA" type="Stereotype" element="_FjcUwM2DEduTE5xAot6nFA"/> <children xmi:id="_OQR4Vs2fEdutrbzDPceyvA" type="Name" element="_FjcUwM2DEduTE5xAot6nFA"/> <styles xmi:type="umlnotation:UMLShapeStyle" xmi:id="_OQR4Uc2fEdutrbzDPceyvA" showStereotype="Label"/> <layoutConstraint xmi:type="notation:Bounds" xmi:id="_OQR4Us2fEdutrbzDPceyvA" x="6996" y="4452"/> </children> <children xmi:id="_OQbpUM2fEdutrbzDPceyvA" element="_Artw0M2DEduTE5xAot6nFA"> <children xmi:id="_OQbpU82fEdutrbzDPceyvA" type="ImageCompartment" element="_Artw0M2DEduTE5xAot6nFA"> <layoutConstraint xmi:type="notation:Size" xmi:id="_OQbpVM2fEdutrbzDPceyvA" width="1320" height="1320"/> </children> <children xmi:id="_OQbpVc2fEdutrbzDPceyvA" type="Stereotype" element="_Artw0M2DEduTE5xAot6nFA"/> [...8677 lines suppressed...] 
</ownedMember> <ownedMember xmi:type="uml:Interface" xmi:id="_6BoZ8NFdEdubB7ExQlt35g" name="IWeakRefCacheEntry"> <ownedOperation xmi:id="_8NGzANFdEdubB7ExQlt35g" name="getKey" type="_Co8PMYqrEduD57FOfu5USw"> <returnResult xmi:id="_8_BYINFdEdubB7ExQlt35g" name="ReturnResult" type="_Co8PMYqrEduD57FOfu5USw" direction="return"/> </ownedOperation> <ownedOperation xmi:id="_9AD58NFdEdubB7ExQlt35g" name="getValue" type="_Co8PMYqrEduD57FOfu5USw"> <returnResult xmi:id="_9y328NFdEdubB7ExQlt35g" name="ReturnResult" type="_Co8PMYqrEduD57FOfu5USw" direction="return"/> </ownedOperation> </ownedMember> <ownedMember xmi:type="uml:Interface" xmi:id="_JsxdINFeEdubB7ExQlt35g" name="ICacheListener"> <ownedOperation xmi:id="_MByMsNFeEdubB7ExQlt35g" name="objectEvicted" type="_hcgqEoqrEduD57FOfu5USw"> <returnResult xmi:id="_NXBhUdFeEdubB7ExQlt35g" name="ReturnResult" type="_hcgqEoqrEduD57FOfu5USw" direction="return"/> <ownedParameter xmi:id="_NXBhUNFeEdubB7ExQlt35g" name="obj" type="_w4wJ4NFdEdubB7ExQlt35g"/> </ownedOperation> </ownedMember> <ownedMember xmi:type="uml:Association" xmi:id="_Q_e1wNFeEdubB7ExQlt35g" memberEnd="_Q_e1wdFeEdubB7ExQlt35g _Q_e1wtFeEdubB7ExQlt35g"> <ownedEnd xmi:id="_Q_e1wtFeEdubB7ExQlt35g" visibility="private" type="_Vw11wNDLEdubB7ExQlt35g" association="_Q_e1wNFeEdubB7ExQlt35g"/> </ownedMember> </ownedMember> </uml:Model> |
From: Bryan T. <tho...@us...> - 2007-03-15 16:11:16
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/service In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv10595/src/java/com/bigdata/service Added Files: EmbeddedDataService.java NIODataService.java IReadOnlyProcedure.java IReducer.java IProcedure.java DataService.java TransactionService.java OldTransactionServer.java IDataService.java IMapOp.java DataServiceClient.java Log Message: Refactoring to define service apis (data service, transaction manager service) and some approximate implementations of those services (not supporting service discovery, network protocol, or service robustness). Copied in the UML model so that it will actually get committed to CVS.... --- NEW FILE: NIODataService.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Oct 9, 2006 */ package com.bigdata.service; import java.io.IOException; import java.nio.ByteBuffer; import java.util.HashMap; import java.util.Map; import java.util.Properties; import java.util.Queue; import java.util.concurrent.ConcurrentLinkedQueue; /** * The network facing {@link DataService} interface. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ * * @todo Refactor code from the nio test suites. Note that the * {@link DataService} already breaks down the tasks into various thread * pools. This class needs to deploy one thread to accept connections, one * to accept requests (which may be assembled over a series of socket * reads). Once a request is complete, it is handed off to a pool of * worker threads iff it has high latency and otherwise executed * immediately (e.g., an tx abort is low latency and is always executed * directly - pretty much everything else is high latency and needs to be * managed by a worker pool). As worker threads complete, they formulate a * response and then place it on the queue for sending back responses to * clients. * * @todo break down into transaction service that directs 2-3 phase commits on * the data services involved in a given transaction. this will require a * protocol for notifying the transaction service when a client will write * on a data service instance. * * @todo provide a service for moving index partitions around to support load * distribution. 
* * @todo Support data replication, e.g., via pipelining writes or ROWAA, * including the case with RAM-only segments that gain failover through * replication. */ public class NIODataService { // // /** // * Open journals (indexed by segment). // * // * @todo Change to int32 keys? // * @todo Define Segment object to encapsulate both the Journal and the // * database as well as any metadata associated with the segment, e.g., // * load stats. // */ // Map<Long,Journal> journals = new HashMap<Long, Journal>(); // // /** // * // * @param segment // * @param properties // * @throws IOException // * // * @todo Define protocol for journal startup. // */ // public void openSegment(long segment, Properties properties) throws IOException { // // if( journals.containsKey(segment)) { // // throw new IllegalStateException(); // // } // // // @todo pass segment in when creating/opening a journal. // Journal journal = new Journal(properties); // // journals.put(segment, journal); // // } // // /** // * // * @param segment // * @param properties // * @throws IOException // * // * @todo Define protocol for journal shutdown. // */ // public void closeSegment(long segment,Properties properties) throws IOException { // // Journal journal = journals.remove(segment); // // if( journal == null ) throw new IllegalArgumentException(); // // /* // * @todo This is far to abupt. We have to gracefully shutdown the // * segment (both the journal and the read-optimized database). // */ // journal._bufferStrategy.close(); // // } // // /** // * Models a request from a client that has been read from the wire and is // * ready for processing. // * // * @author <a href="mailto:tho...@us...">Bryan Thompson</a> // * @version $Id$ // * // * @todo define this and define how it relates to client responses. // */ // public static class ClientRequest { // // } // // /** // * Models a response that is read to be send down the wire to a client. // * // * @author <a href="mailto:tho...@us...">Bryan Thompson</a> // * @version $Id$ // * // * @todo define this and define how it relates to client requests. // */ // public static class ClientResponse { // // } // // /** // * @todo create with NO open segments, and then accept requests to receive, // * open, create, send, close, or delete a segment. When opening a // * segment, open both the journal and the database. Keep properties // * for defaults? Server options only? // * // * @todo Work out relationship between per-segment and per-transaction // * request processing. If requests are FIFO per transaction, then that // * has to dominate the queues but we may want to have a set of worker // * threads that allow greater parallism when processing requests in // * different transactions against different segments. // */ // public NIODataService(Properties properties) { // // Queue<ClientRequest> requests = new ConcurrentLinkedQueue<ClientRequest>(); // // Queue<ClientResponse> responses = new ConcurrentLinkedQueue<ClientResponse>(); // // // } // // /** // * The {@link ClientAcceptor} accepts new clients. // * // * @author <a href="mailto:tho...@us...">Bryan Thompson</a> // * @version $Id$ // */ // public class ClientAcceptor extends Thread { // // public ClientAcceptor() { // // super("Client-Acceptor"); // // } // // } // // /** // * The {@link ClientResponder} buffers Read, Write, and Delete requests from // * the client, places them into a per-transaction queue, and notices when // * results are available to send back to the client. 
// * // * @author <a href="mailto:tho...@us...">Bryan Thompson</a> // * @version $Id$ // * // * @todo Delete requests are basically a special case of write requests, but // * they probably deserve distinction in the protocol. // */ // public class ClientResponder extends Thread { // // public ClientResponder() { // // super("Client-Responder"); // // } // // } // // /** // * The {@link ClientRequestHandler} consumes buffered client requests from a // * per-transaction FIFO queue and places responses onto a queue where they // * are picked up by the {@link ClientResponder} and sent to the client. // * // * @author <a href="mailto:tho...@us...">Bryan Thompson</a> // * @version $Id$ // * // * @todo Reads from the journal must read through any FIFO queue, which // * means indexing the buffered request by transaction, arrival time, // * and objects written. If client requests write directly through then // * we can simplify this logic. However, I believe that we need to be // * able to suspend writes on the journal during commit processing. If // * the client had to block on writes for any transaction, that could // * introduce unacceptable latency. // */ // // public class ClientRequestHandler extends Thread { // // final Journal journal; // // /* // * @todo handshake with journal to make sure that the writer is // * exclusive, e.g., obtaining an exclusive file lock might work. // */ // public ClientRequestHandler(Journal journal) { // // super("Journal-Writer"); // // if (journal == null) // throw new IllegalArgumentException(); // // this.journal = journal; // // } // // /** // * Write request from client. // * // * @param txId // * The transaction identifier. // * @param objId // * The int32 within-segment persistent identifier. // * @param data // * The data to be written. The bytes from // * {@link ByteBuffer#position()} to // * {@link ByteBuffer#limit()} will be written. // */ // public void write(long txId, int objId, ByteBuffer data) { // // ITx transaction = journal.getTx(txId); // // if( transaction == null ) { // // // @todo Send back an error. // // throw new UnsupportedOperationException(); // // } // // transaction.write(objId,data); // // } // // /** // * Read request from the client. // * // * @param txId // * The transaction identifier. // * // * @param objId // * The int32 within-segment persistent identifier. // */ // public void read(long txId, int objId) { // // ITx transaction = journal.getTx(txId); // // if( transaction == null ) { // // // @todo Send back an error. // // throw new UnsupportedOperationException(); // // } // // /* // * @todo If we are doing a row scan or any kind of read-ahead then // * we can buffer the results into a block and send it back along // * with an object map so that the client can slice the individual // * rows out of the block. // */ // ByteBuffer data = transaction.read(objId, null); // // if( data == null ) { // // /* // * FIXME Resolve the object against the database. // */ // throw new UnsupportedOperationException("Read from database"); // // } // // /* // * FIXME Write the data onto a socket to get it back to the client. // */ // // } // // /** // * Delete request from client. // * // * @param txId // * The transaction identifier. // * @param objId // * The int32 within-segment persistent identifier. // */ // public void delete(long txId, int objId) { // // ITx transaction = journal.getTx(txId); // // if( transaction == null ) { // // // @todo Send back an error. 
// // throw new UnsupportedOperationException(); // // } // // transaction.delete(objId); // // } // // } } --- NEW FILE: DataService.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Mar 14, 2007 */ package com.bigdata.service; import java.util.Properties; import java.util.concurrent.Callable; import java.util.concurrent.ExecutionException; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import com.bigdata.journal.AbstractJournal; import com.bigdata.journal.ITx; import com.bigdata.journal.IsolationEnum; import com.bigdata.journal.Journal; import com.bigdata.objndx.BatchContains; import com.bigdata.objndx.BatchInsert; import com.bigdata.objndx.BatchLookup; import com.bigdata.objndx.BatchRemove; import com.bigdata.objndx.IBatchBTree; import com.bigdata.objndx.IBatchOp; import com.bigdata.objndx.IEntryIterator; import com.bigdata.objndx.IIndex; import com.bigdata.objndx.ILinearList; import com.bigdata.objndx.IReadOnlyBatchOp; import com.bigdata.objndx.ISimpleBTree; import com.bigdata.util.concurrent.DaemonThreadFactory; /** * An implementation of a data service suitable for use with RPC, direct client * calls (if decoupled by an operation queue), or a NIO interface. * <p> * This implementation is thread-safe. It will block for each operation. It MUST * be invoked within a pool of request handler threads servicing a network * interface in order to decouple data service operations from client requests. * When using as part of an embedded database, the client operations MUST be * buffered by a thread pool with a FIFO policy so that client requests will be * decoupled from data service operations. * <p> * The {@link #txService} provides concurrency for transaction processing. * <p> * The {@link #opService} provides concurrency for unisolated reads. * <p> * Unisolated writes serialized using * {@link AbstractJournal#serialize(Callable)}. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ * * @see NIODataService, which contains some old code that can be refactored for * an NIO interface to the data service. * * @todo add assertOpen() throughout * * @todo declare interface for managing service shutdown()/shutdownNow()? 
* * @todo support group commit for unisolated writes. i may have to refactor some * to get group commit to work for both transaction commits and unisolated * writes. basically, the tasks on the * {@link AbstractJournal#writeService write service} need to get * aggregated. * * @todo implement NIODataService, RPCDataService(possible), EmbeddedDataService * (uses queue to decouple operations), DataServiceClient (provides * translation from {@link ISimpleBTree} to {@link IBatchBTree}, provides * transparent partitioning of batch operations, handles handshaking and * leases with the metadata index locator service; abstract IO for * different client platforms (e.g., support PHP, C#). Bundle ICU4J with * the client. * * @todo JobScheduler service for map/reduce (or Hadoop integration). * * @todo another data method will need to be defined to support GOM with * pre-fetch. the easy way to do this is to get 50 objects to either side * of the object having the supplied key. This is easy to compute using * the {@link ILinearList} interface. I am not sure about unisolated * operations for GOM.... Isolated operations are straight forward. The * other twist is supporting scalable link sets, link set indices (not * named, unless the identity of the object collecting the link set is * part of the key), and non-OID indices (requires changes to * generic-native). * * @todo Have the {@link DataService} notify the transaction manager when a * write is performed on that service so that all partitipating * {@link DataService} instances will partitipate in a 2-/3-phase commit * (and a simple commit can be used when the transaction write set is * localized on a single dataservice instance). The message needs to be * synchronous each time a new index partition is written on by the client * so that the transaction manager can locate the primary * {@link DataService} instance for the write when it needs to commit or * abort the tx. */ public class DataService implements IDataService { protected Journal journal; /** * Pool of threads for handling unisolated reads. */ final protected ExecutorService readService; /** * Pool of threads for handling concurrent transactions. */ final protected ExecutorService txService; /** * Options understood by the {@link DataService}. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public static class Options extends com.bigdata.journal.Options { /** * <code>readServicePoolSize</code> - The #of threads in the pool * handling concurrent unisolated read requests. * * @see #DEFAULT_READ_SERVICE_POOL_SIZE */ public static final String READ_SERVICE_POOL_SIZE = "readServicePoolSize"; /** * The default #of threads in the read service thread pool. */ public final static int DEFAULT_READ_SERVICE_POOL_SIZE = 20; /** * <code>txServicePoolSize</code> - The #of threads in the pool * handling concurrent transactions. * * @see #DEFAULT_TX_SERVICE_POOL_SIZE */ public static final String TX_SERVICE_POOL_SIZE = "txServicePoolSize"; /** * The default #of threads in the transaction service thread pool. 
*/ public final static int DEFAULT_TX_SERVICE_POOL_SIZE = 100; } /** * * @param properties */ public DataService(Properties properties) { String val; final int txServicePoolSize; final int readServicePoolSize; /* * "readServicePoolSize" */ val = properties.getProperty(Options.READ_SERVICE_POOL_SIZE); if (val != null) { readServicePoolSize = Integer.parseInt(val); if (readServicePoolSize < 1 ) { throw new RuntimeException("The '" + Options.READ_SERVICE_POOL_SIZE + "' must be at least one."); } } else readServicePoolSize = Options.DEFAULT_READ_SERVICE_POOL_SIZE; /* * "txServicePoolSize" */ val = properties.getProperty(Options.TX_SERVICE_POOL_SIZE); if (val != null) { txServicePoolSize = Integer.parseInt(val); if (txServicePoolSize < 1 ) { throw new RuntimeException("The '" + Options.TX_SERVICE_POOL_SIZE + "' must be at least one."); } } else txServicePoolSize = Options.DEFAULT_TX_SERVICE_POOL_SIZE; /* * The journal's write service will be used to handle unisolated writes * and transaction commits. * * @todo parameterize for use of scale-up vs scale-out journal impls. */ journal = new Journal(properties); // setup thread pool for unisolated read operations. readService = Executors.newFixedThreadPool(readServicePoolSize, DaemonThreadFactory.defaultThreadFactory()); // setup thread pool for concurrent transactions. txService = Executors.newFixedThreadPool(txServicePoolSize, DaemonThreadFactory.defaultThreadFactory()); } /** * Polite shutdown does not accept new requests and will shutdown once * the existing requests have been processed. */ public void shutdown() { readService.shutdown(); txService.shutdown(); journal.shutdown(); } /** * Shutdown attempts to abort in-progress requests and shutdown as soon * as possible. */ public void shutdownNow() { readService.shutdownNow(); txService.shutdownNow(); journal.close(); } /* * ITxCommitProtocol. */ public void commit(long tx) { // will place task on writeService and block iff necessary. journal.commit(tx); } public void abort(long tx) { // will place task on writeService iff read-write tx. journal.abort(tx); } /* * IDataService. */ private boolean isReadOnly(long startTime) { assert startTime != 0l; ITx tx = journal.getTx(startTime); if (tx == null) { throw new IllegalStateException("Unknown: tx=" + startTime); } return tx.isReadOnly(); } /** * * @todo the state of the op is changed as a side effect and needs to be * communicated back to a remote client. Also, the remote client does * not need to send uninitialized data across the network when the * batch operation will use the data purely for a response - we can * just initialize the data fields on this side of the interface and * then send them back across the network api. */ public void batchOp(long tx, String name, IBatchOp op) throws InterruptedException, ExecutionException { if( name == null ) throw new IllegalArgumentException(); if( op == null ) throw new IllegalArgumentException(); final boolean isolated = tx != 0L; final boolean readOnly = (op instanceof IReadOnlyBatchOp) || (isolated && isReadOnly(tx)); if(isolated) { txService.submit(new TxBatchTask(tx,name,op)).get(); } else if( readOnly ) { readService.submit(new UnisolatedReadBatchTask(name,op)).get(); } else { /* * Special case since incomplete writes MUST be discarded and * complete writes MUST be committed. 
*/ journal.serialize(new UnisolatedBatchReadWriteTask(name,op)).get(); } } public void submit(long tx, IProcedure proc) throws InterruptedException, ExecutionException { if( proc == null ) throw new IllegalArgumentException(); final boolean isolated = tx != 0L; final boolean readOnly = proc instanceof IReadOnlyProcedure; if(isolated) { txService.submit(new TxProcedureTask(tx,proc)).get(); } else if( readOnly ) { readService.submit(new UnisolatedReadProcedureTask(proc)).get(); } else { /* * Special case since incomplete writes MUST be discarded and * complete writes MUST be committed. */ journal.serialize(new UnisolatedReadWriteProcedureTask(proc)).get(); } } public RangeQueryResult rangeQuery(long tx, String name, byte[] fromKey, byte[] toKey, boolean countOnly, boolean keysOnly, boolean valuesOnly) throws InterruptedException, ExecutionException { if( name == null ) throw new IllegalArgumentException(); if (tx == 0L) throw new UnsupportedOperationException( "Unisolated context not allowed"); RangeQueryResult result = (RangeQueryResult)txService.submit( new RangeQueryTask(tx, name, fromKey, toKey, countOnly, keysOnly, valuesOnly)).get(); return result; } /** * @todo if unisolated or isolated at the read-commit level, then the * operation really needs to be broken down by partition or perhaps by * index segment leaf so that we do not have too much latency during a * read (this could be done for rangeQuery as well). * * @todo if fully isolated, then there is no problem running map. * * @todo The definition of a row is different if using a key formed from the * column name, application key, and timestamp. * * @todo For at least GOM we need to deserialize rows from byte[]s, so we * need to have the (de-)serializer to the application level value on * hand. */ public void map(long tx, String name, byte[] fromKey, byte[] toKey, IMapOp op) throws InterruptedException, ExecutionException { if( name == null ) throw new IllegalArgumentException(); if (tx == 0L) throw new UnsupportedOperationException( "Unisolated context not allowed"); RangeQueryResult result = (RangeQueryResult) txService.submit( new RangeQueryTask(tx, name, fromKey, toKey, false, false, false)).get(); // @todo resolve the reducer service. IReducer reducer = null; op.apply(result.itr, reducer); } /** * Abstract class for tasks that execute batch api operations. There are * various concrete subclasses, each of which MUST be submitted to the * appropriate service for execution. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ private abstract class AbstractBatchTask implements Callable<Object> { private final String name; private final IBatchOp op; public AbstractBatchTask(String name, IBatchOp op) { this.name = name; this.op = op; } abstract IIndex getIndex(String name); public Object call() throws Exception { IIndex ndx = getIndex(name); if (ndx == null) throw new IllegalStateException("Index not registered: " + name); if( op instanceof BatchContains ) { ndx.contains((BatchContains) op); } else if( op instanceof BatchLookup ) { ndx.lookup((BatchLookup) op); } else if( op instanceof BatchInsert ) { ndx.insert((BatchInsert) op); } else if( op instanceof BatchRemove ) { ndx.remove((BatchRemove) op); } else { // Extension batch mutation operation. op.apply(ndx); } return null; } } /** * Resolves the named index against the transaction in order to provide * appropriate isolation for reads, read-committed reads, or writes. 
* * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ * * @see ITx * * @todo In order to allow multiple clients to do work on the same * transaction at once, we need a means to ensure that the same * transaction is not assigned to more than one thread in the * {@link DataService#txService}. In the absence of clients imposing * a protocol among themselves for this purpose, we can simply * maintain a mapping of transactions to threads. If a transaction is * currently bound to a thread (its callable task is executing) then * the current thread must wait. This protocol can be easily * implemented using thread local variables.<br> * Note: it is possible for this protocol to result in large numbers * of worker threads blocking, but as long as each worker thread makes * progress it should not be possible for the thread pool as a whole * to block. */ private class TxBatchTask extends AbstractBatchTask { private final ITx tx; public TxBatchTask(long startTime, String name, IBatchOp op) { super(name,op); assert startTime != 0L; tx = journal.getTx(startTime); if (tx == null) { throw new IllegalStateException("Unknown tx"); } if (!tx.isActive()) { throw new IllegalStateException("Tx not active"); } } public IIndex getIndex(String name) { return tx.getIndex(name); } } /** * Class used for unisolated <em>read</em> operations. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ private class UnisolatedReadBatchTask extends AbstractBatchTask { public UnisolatedReadBatchTask(String name, IBatchOp op) { super(name,op); } public IIndex getIndex(String name) { return journal.getIndex(name); } } /** * Class used for unisolated <em>write</em> operations. This class * performs the necessary handshaking with the journal to discard partial * writes in the event of an error during processing and to commit after a * successful write operation, thereby providing the ACID contract for an * unisolated write. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ private class UnisolatedBatchReadWriteTask extends UnisolatedReadBatchTask { public UnisolatedBatchReadWriteTask(String name, IBatchOp op) { super(name,op); } protected void abort() { journal.abort(); } public Long call() throws Exception { try { super.call(); // commit (synchronous, immediate). return journal.commit(); } catch(Throwable t) { abort(); throw new RuntimeException(t); } } } private class RangeQueryTask implements Callable<Object> { private final String name; private final byte[] fromKey; private final byte[] toKey; private final boolean countOnly; private final boolean keysOnly; private final boolean valuesOnly; private final ITx tx; public RangeQueryTask(long startTime, String name, byte[] fromKey, byte[] toKey, boolean countOnly, boolean keysOnly, boolean valuesOnly) { assert startTime != 0L; tx = journal.getTx(startTime); if (tx == null) { throw new IllegalStateException("Unknown tx"); } if (!tx.isActive()) { throw new IllegalStateException("Tx not active"); } if( tx.getIsolationLevel() == IsolationEnum.ReadCommitted ) { throw new UnsupportedOperationException("Read-committed not supported"); } this.name = name; this.fromKey = fromKey; this.toKey = toKey; this.countOnly = countOnly; this.keysOnly = keysOnly; this.valuesOnly = valuesOnly; } public IIndex getIndex(String name) { return tx.getIndex(name); } public Object call() throws Exception { IIndex ndx = getIndex(name); final int count = ndx.rangeCount(fromKey, toKey); final IEntryIterator itr = (countOnly ? 
null : ndx.rangeIterator( fromKey, toKey)); return new RangeQueryResult(count, itr, tx.getStartTimestamp(), name, fromKey, toKey, countOnly, keysOnly, valuesOnly); } } /** * @todo must keep track of the open iterators on the transaction and * invalidate them once the transaction completes. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public static class RangeQueryResult { public final int count; public final IEntryIterator itr; public final long startTime; public final String name; public final byte[] fromKey; public final byte[] toKey; public final boolean countOnly; public final boolean keysOnly; public final boolean valuesOnly; public RangeQueryResult(int count, IEntryIterator itr, long startTime, String name, byte[] fromKey, byte[] toKey, boolean countOnly, boolean keysOnly, boolean valuesOnly) { this.count = count; this.itr = itr; this.startTime = startTime; this.name = name; this.fromKey = fromKey; this.toKey = toKey; this.countOnly = countOnly; this.keysOnly = keysOnly; this.valuesOnly = valuesOnly; } } /** * Abstract class for tasks that execute batch api operations. There are * various concrete subclasses, each of which MUST be submitted to the * appropriate service for execution. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ private abstract class AbstractProcedureTask implements Callable<Object> { protected final IProcedure proc; public AbstractProcedureTask(IProcedure proc) { this.proc = proc; } } /** * Resolves the named index against the transaction in order to provide * appropriate isolation for reads, read-committed reads, or writes. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ * * @see ITx * * @todo In order to allow multiple clients to do work on the same * transaction at once, we need a means to ensure that the same * transaction is not assigned to more than one thread in the * {@link DataService#txService}. In the absence of clients imposing * a protocol among themselves for this purpose, we can simply * maintain a mapping of transactions to threads. If a transaction is * currently bound to a thread (its callable task is executing) then * the current thread must wait. This protocol can be easily * implemented using thread local variables.<br> * Note: it is possible for this protocol to result in large numbers * of worker threads blocking, but as long as each worker thread makes * progress it should not be possible for the thread pool as a whole * to block. */ private class TxProcedureTask extends AbstractProcedureTask { private final ITx tx; public TxProcedureTask(long startTime, IProcedure proc) { super(proc); assert startTime != 0L; tx = journal.getTx(startTime); if (tx == null) { throw new IllegalStateException("Unknown tx"); } if (!tx.isActive()) { throw new IllegalStateException("Tx not active"); } } public Object call() throws Exception { proc.apply(tx.getStartTimestamp(),tx); return null; } } /** * Class used for unisolated <em>read</em> operations. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ private class UnisolatedReadProcedureTask extends AbstractProcedureTask { public UnisolatedReadProcedureTask(IProcedure proc) { super(proc); } public Object call() throws Exception { proc.apply(0L,journal); return null; } } /** * Class used for unisolated <em>write</em> operations. 
This class * performs the necessary handshaking with the journal to discard partial * writes in the event of an error during processing and to commit after a * successful write operation, thereby providing the ACID contract for an * unisolated write. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ private class UnisolatedReadWriteProcedureTask extends UnisolatedReadProcedureTask { public UnisolatedReadWriteProcedureTask(IProcedure proc) { super(proc); } protected void abort() { journal.abort(); } public Long call() throws Exception { try { super.call(); // commit (synchronous, immediate). return journal.commit(); } catch(Throwable t) { abort(); throw new RuntimeException(t); } } } } --- NEW FILE: IDataService.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Mar 15, 2007 */ package com.bigdata.service; import java.util.concurrent.ExecutionException; import com.bigdata.journal.ITransactionManager; import com.bigdata.journal.ITxCommitProtocol; import com.bigdata.journal.IsolationEnum; import com.bigdata.objndx.BTree; import com.bigdata.objndx.IBatchOp; import com.bigdata.service.DataService.RangeQueryResult; /** * Data service interface. * <p> * The data service exposes the methods on this interface to the client and the * {@link ITxCommitProtocol} methods to the {@link ITransactionManager} service. * <p> * The data service exposes both isolated (transactional) and unisolated batch * operations on scalable named btrees. Transactions are identified by their * start time. BTrees are identified by name. The btree batch API provides for * existence testing, lookup, insert, removal, and an extensible mutation * operation. Other operations exposed by this interface include: remote * procedure execution, key range traversal, and mapping of an operator over a * key range. * <p> * Unisolated processing is broken down into idempotent operation (reads) and * mutation operations (insert, remove, the extensible batch operator, and * remote procedure execution). * <p> * Unisolated writes are serialized and ACID. If an unisolated write succeeds, * then it will commit immediately. If the unisolated write fails, the partial * write on the journal will be discarded. 
* * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public interface IDataService extends ITxCommitProtocol { /** * Used by the client to submit a batch operation on a named B+Tree * (synchronous). * <p> * Unisolated operations SHOULD be used to achieve "auto-commit" semantics. * Fully isolated transactions are useful IFF multiple operations must be * composed into a ACID unit. * <p> * While unisolated batch operations on a single data service are ACID, * clients are required to locate all index partitions for the logical * operation and distribute their operation across the distinct data service * instances holding the affected index partitions. In practice, this means * that contract for ACID unisolated operations is limited to operations * where the data is located on a single data service instance. For ACID * operations that cross multiple data service instances the client MUST use * a fully isolated transaction. While read-committed transactions impose * low system overhead, clients interested in the higher possible total * throughput SHOULD choose unisolated read operations in preference to a * read-committed transaction. * <p> * This method is thread-safe. It will block for each operation. It should * be invoked within a pool request handler threads servicing a network * interface and thereby decoupling data service operations from client * requests. When using as part of an embedded database, the client * operations MUST be buffered by a thread pool with a FIFO policy so that * client requests may be decoupled from data service operations. * * @param tx * The transaction identifier -or- zero (0L) IFF the operation is * NOT isolated by a transaction. * @param name * The index name (required). * @param op * The batch operation. * * @exception InterruptedException * if the operation was interrupted (typically by * {@link #shutdownNow()}. * @exception ExecutionException * If the operation caused an error. See * {@link ExecutionException#getCause()} for the underlying * error. */ public void batchOp(long tx, String name, IBatchOp op) throws InterruptedException, ExecutionException; /** * Submit a procedure. * <p> * <p> * Unisolated operations SHOULD be used to achieve "auto-commit" semantics. * Fully isolated transactions are useful IFF multiple operations must be * composed into a ACID unit. * <p> * While unisolated batch operations on a single data service are ACID, * clients are required to locate all index partitions for the logical * operation and distribute their operation across the distinct data service * instances holding the affected index partitions. In practice, this means * that contract for ACID unisolated operations is limited to operations * where the data is located on a single data service instance. For ACID * operations that cross multiple data service instances the client MUST use * a fully isolated transaction. While read-committed transactions impose * low system overhead, clients interested in the higher possible total * throughput SHOULD choose unisolated read operations in preference to a * read-committed transaction. * * @param tx * The transaction identifier -or- zero (0L) IFF the operation is * NOT isolated by a transaction. * @param proc * The procedure to be executed. * * @throws InterruptedException * @throws ExecutionException */ public void submit(long tx, IProcedure proc) throws InterruptedException, ExecutionException; /** * Streaming traversal of keys and/or values in a given key range. 
* <p> * Note: The rangeQuery operation is NOT allowed for either unisolated reads * or read-committed transactions (the underlying constraint is that the * {@link BTree} does NOT support traversal under concurrent modification * this operation is limited to read-only or fully isolated transactions). * * @param tx * @param name * @param fromKey * @param toKey * @param countOnly * @param keysOnly * @param valuesOnly * * @exception InterruptedException * if the operation was interrupted (typically by * {@link #shutdownNow()}. * @exception ExecutionException * If the operation caused an error. See * {@link ExecutionException#getCause()} for the underlying * error. * @exception UnsupportedOperationException * If the tx is zero (0L) (indicating an unisolated * operation) -or- if the identifed transaction is * {@link IsolationEnum#ReadCommitted}. */ public RangeQueryResult rangeQuery(long tx, String name, byte[] fromKey, byte[] toKey, boolean countOnly, boolean keysOnly, boolean valuesOnly) throws InterruptedException, ExecutionException; /** * Maps an operation against all key/value pairs in a key range, writing the * result onto a reducer service. */ public void map(long tx, String name, byte[] fromKey, byte[] toKey, IMapOp op) throws InterruptedException, ExecutionException; } --- NEW FILE: EmbeddedDataService.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Mar 15, 2007 */ package com.bigdata.service; import java.util.Properties; import java.util.concurrent.ExecutionException; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import com.bigdata.objndx.IBatchOp; import com.bigdata.service.DataService.RangeQueryResult; import com.bigdata.util.concurrent.DaemonThreadFactory; /** * Implementation suitable for a standalone embedded database. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ * * @todo either decouple client operations from the data service (they are * synchronous) or drop this class and just use DataService directly for * an embedded data service (or potentially integrate a * {@link DataService} and {@link TransactionService} instance together * using delegation patterns). 
*/ public class EmbeddedDataService implements IDataService { private final DataService delegate; /** * Pool of threads for decoupling client operations. */ final protected ExecutorService opService; public EmbeddedDataService(Properties properties) { delegate = new DataService(properties); // setup thread pool for decoupling client operations. opService = Executors.newFixedThreadPool(100, DaemonThreadFactory .defaultThreadFactory()); } /** * Polite shutdown does not accept new requests and will shutdown once * the existing requests have been processed. */ public void shutdown() { delegate.shutdown(); } /** * Shutdown attempts to abort in-progress requests and shutdown as soon * as possible. */ public void shutdownNow() { delegate.shutdownNow(); } public void batchOp(long tx, String name, IBatchOp op) throws InterruptedException, ExecutionException { delegate.batchOp(tx, name, op); } public void map(long tx, String name, byte[] fromKey, byte[] toKey, IMapOp op) throws InterruptedException, ExecutionException { delegate.map(tx, name, fromKey, toKey, op); } public RangeQueryResult rangeQuery(long tx, String name, byte[] fromKey, byte[] toKey, boolean countOnly, boolean keysOnly, boolean valuesOnly) throws InterruptedException, ExecutionException { return delegate.rangeQuery(tx, name, fromKey, toKey, countOnly, keysOnly, valuesOnly); } public void submit(long tx, IProcedure proc) throws InterruptedException, ExecutionException { delegate.submit(tx, proc); } public void abort(long tx) { delegate.abort(tx); } public void commit(long tx) { delegate.commit(tx); } } --- NEW FILE: DataServiceClient.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Mar 15, 2007 */ package com.bigdata.service; import java.util.Properties; import java.util.concurrent.ExecutionException; import com.bigdata.objndx.IBatchOp; import com.bigdata.service.DataService.RangeQueryResult; /** * Client facing interface for communications with a data service. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ * * @todo provide implementations for embedded vs remote data service instances. * Only remote data service instances discovered via the metadata locator * service will support a scale-out solution. 
*/ public class DataServiceClient implements IDataService { final IDataService delegate; public DataServiceClient(Properties properties) { // @todo provide option for other kinds of connections. delegate = new EmbeddedDataServiceClient(properties); } private class EmbeddedDataServiceClient extends DataService { EmbeddedDataServiceClient(Properties properties) { super(properties); } } /* * @todo implement remote data service client talking to NIO service * instance. this needs to locate the transaction manager service and * the metadata service for each index used by the client. */ abstract private class NIODataServiceClient implements IDataService { } /** * Polite shutdown does not accept new requests and will shutdown once * the existing requests have been processed. */ public void shutdown() { ((DataService)delegate).shutdown(); } /** * Shutdown attempts to abort in-progress requests and shutdown as soon * as possible. */ public void shutdownNow() { ((DataService)delegate).shutdownNow(); } public void batchOp(long tx, String name, IBatchOp op) throws InterruptedException, ExecutionException { delegate.batchOp(tx, name, op); } public void map(long tx, String name, byte[] fromKey, byte[] toKey, IMapOp op) throws InterruptedException, ExecutionException { delegate.map(tx, name, fromKey, toKey, op); } public RangeQueryResult rangeQuery(long tx, String name, byte[] fromKey, byte[] toKey, boolean countOnly, boolean keysOnly, boolean valuesOnly) throws InterruptedException, ExecutionException { return delegate.rangeQuery(tx, name, fromKey, toKey, countOnly, keysOnly, valuesOnly); } public void submit(long tx, IProcedure proc) throws InterruptedException, ExecutionException { delegate.submit(tx, proc); } public void abort(long tx) { delegate.abort(tx); } public void commit(long tx) { delegate.commit(tx); } } --- NEW FILE: IProcedure.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Mar 15, 2007 */ package com.bigdata.service; import com.bigdata.journal.IIndexStore; import com.bigdata.journal.IJournal; import com.bigdata.journal.ITx; /** * A procedure to be executed on an {@link IDataService}. 
* * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public interface IProcedure { /** * Run the procedure. * <p> * Unisolated procedures have "auto-commit" ACID properties for the local * {@link IDataService} on which they execute, but DO NOT have distributed * ACID properties. In order for a distributed procedure to be ACID, the * procedure MUST be fully isolated. * * @param tx * The transaction identifier (aka start time) -or- zero (0L) IFF * this is an unisolated operation. * @param store * The store against which writes will be made. If the procedure * is running inside of a transaction, then this will be an * {@link ITx}. If the procedure is running unisolated, then * this will be an {@link IJournal}. */ public void apply(long tx,IIndexStore store); } --- NEW FILE: TransactionService.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use th... [truncated message content] |
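The @todo on EmbeddedDataService above notes that client operations are still synchronous even though an opService thread pool is allocated. The following is a minimal sketch of the intended decoupling, assuming nothing beyond the JDK; the SyncService and AsyncFacade names are hypothetical stand-ins, not classes from this commit. The facade simply submits each delegate call to its own pool and hands back a Future, so the caller's thread is not blocked for the duration of the operation.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Sketch only: decoupling a synchronous delegate behind a Future-returning facade. */
public class AsyncFacadeSketch {

    /** Hypothetical stand-in for one synchronous data service operation. */
    interface SyncService {
        int rangeCount(String indexName) throws Exception;
    }

    /** Facade that runs each call on its own pool instead of the caller's thread. */
    static class AsyncFacade {
        private final SyncService delegate;
        private final ExecutorService opService = Executors.newFixedThreadPool(4);

        AsyncFacade(SyncService delegate) {
            this.delegate = delegate;
        }

        /** The caller gets a Future immediately; the delegate runs on the pool. */
        Future<Integer> rangeCount(final String indexName) {
            return opService.submit(new Callable<Integer>() {
                public Integer call() throws Exception {
                    return delegate.rangeCount(indexName);
                }
            });
        }

        /** Polite shutdown: finish queued operations, accept no new ones. */
        void shutdown() {
            opService.shutdown();
        }
    }

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        final SyncService delegate = new SyncService() {
            public int rangeCount(String indexName) {
                return 42; // pretend this walked the index
            }
        };
        final AsyncFacade facade = new AsyncFacade(delegate);
        final Future<Integer> result = facade.rangeCount("test-index");
        System.out.println(result.get()); // block only when the answer is needed
        facade.shutdown();
    }
}

Whether the real IDataService should expose Futures directly or keep the blocking signatures shown above is exactly the open design question raised by that @todo.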
From: Bryan T. <tho...@us...> - 2007-03-15 16:09:55
|
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/client In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv10572/src/java/com/bigdata/client Log Message: Directory /cvsroot/cweb/bigdata/src/java/com/bigdata/client added to the repository |
From: Bryan T. <tho...@us...> - 2007-03-15 16:09:55
|
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/service In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv10572/src/java/com/bigdata/service Log Message: Directory /cvsroot/cweb/bigdata/src/java/com/bigdata/service added to the repository |
From: Bryan T. <tho...@us...> - 2007-03-12 18:06:19
|
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/journal In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv21164/src/java/com/bigdata/journal Modified Files: CommitRecord.java Name2Addr.java ReadCommittedTx.java Tx.java CommitRecordIndex.java IAtomicStore.java Journal.java TemporaryStore.java Log Message: Working on canonicalizing mappings to improve resource utilization. Index: CommitRecord.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/CommitRecord.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** CommitRecord.java 8 Mar 2007 18:14:06 -0000 1.3 --- CommitRecord.java 12 Mar 2007 18:06:11 -0000 1.4 *************** *** 48,51 **** --- 48,52 ---- package com.bigdata.journal; + /** * A read-only view of an {@link ICommitRecord}. Index: Name2Addr.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Name2Addr.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** Name2Addr.java 11 Mar 2007 11:42:45 -0000 1.5 --- Name2Addr.java 12 Mar 2007 18:06:11 -0000 1.6 *************** *** 25,28 **** --- 25,38 ---- * known {@link Addr address} of the {@link BTreeMetadata} record for the named * index. + * <p> + * Note: The {@link Journal} maintains an instance of this class that evolves + * with each {@link Journal#commit()}. However, the journal also makes use of + * historical states for the {@link Name2Addr} index in order to resolve the + * historical state of a named index. Of necessity, the {@link Name2Addr} + * objects used for this latter purpose MUST be distinct from the evolving + * instance otherwise the current version of the named index would be resolved. + * Note further that the historical {@link Name2Addr} states are accessed using + * a canonicalizing mapping but that current evolving {@link Name2Addr} instance + * is NOT part of that mapping. */ public class Name2Addr extends BTree { *************** *** 35,67 **** * synchonization interface for multi-threaded use or move an instance onto * the store since it already has a single-threaded contract for its api? ! */ private KeyBuilder keyBuilder = new KeyBuilder(); /** ! * Cache of added/retrieved btrees. Only btrees found in this cache are ! * candidates for the commit protocol. The use of this cache also minimizes ! * the use of the {@link #keyBuilder} and therefore minimizes the relatively ! * expensive operation of encoding unicode names to byte[] keys. * ! * FIXME This is the place to solve the resource (RAM) burden for indices is ! * Name2Addr. Currently, indices are never closed once opened which is a ! * resource leak. We need to close them out eventually based on LRU plus ! * timeout plus NOT IN USE. The way to approach this is a weak reference ! * cache combined with an LRU or hard reference queue that tracks reference ! * counters (just like the BTree hard reference cache for leaves). Eviction ! * events lead to closing an index iff the reference counter is zero. ! * Touches keep recently used indices from closing even though they may have ! * a zero reference count. */ ! private Map<String,IIndex> name2BTree = new HashMap<String,IIndex>(); ! public Name2Addr(IRawStore store) { super(store, DEFAULT_BRANCHING_FACTOR, ValueSerializer.INSTANCE); } /** ! * Load from the store. 
* * @param store --- 45,81 ---- * synchonization interface for multi-threaded use or move an instance onto * the store since it already has a single-threaded contract for its api? ! */ private KeyBuilder keyBuilder = new KeyBuilder(); /** ! * Cache of added/retrieved btrees by _name_. This cache is ONLY used by the ! * "live" {@link Name2Addr} instance. Only the indices found in this cache ! * are candidates for the commit protocol. The use of this cache also ! * minimizes the use of the {@link #keyBuilder} and therefore minimizes the ! * relatively expensive operation of encoding unicode names to byte[] keys. * ! * FIXME This never lets go of an unisolated index once it has been looked ! * up and placed into this cache. Therefore, modify this class to use a ! * combination of a weak value cache (so that we can let go of the ! * unisolated named indices) and a commit list (so that we have hard ! * references to any dirty indices up to the next invocation of ! * {@link #handleCommit()}. This will require a protocol by which we notice ! * writes on the named index, probably using a listener API so that we do ! * not have to wrap up the index as a different kind of object. */ ! private Map<String, IIndex> indexCache = new HashMap<String, IIndex>(); ! ! // protected final Journal journal; ! public Name2Addr(IRawStore store) { super(store, DEFAULT_BRANCHING_FACTOR, ValueSerializer.INSTANCE); + // this.journal = store; + } /** ! * Load from the store (de-serialization constructor). * * @param store *************** *** 74,77 **** --- 88,93 ---- super(store, metadata); + // this.journal = store; + } *************** *** 92,96 **** * current commit. */ ! Iterator<Map.Entry<String,IIndex>> itr = name2BTree.entrySet().iterator(); while(itr.hasNext()) { --- 108,112 ---- * current commit. */ ! Iterator<Map.Entry<String,IIndex>> itr = indexCache.entrySet().iterator(); while(itr.hasNext()) { *************** *** 108,111 **** --- 124,130 ---- insert(getKey(name),new Entry(name,addr)); + // // place into the object cache on the journal. + // journal.touch(addr, btree); + } *************** *** 141,145 **** public IIndex get(String name) { ! IIndex btree = name2BTree.get(name); if (btree != null) --- 160,164 ---- public IIndex get(String name) { ! IIndex btree = indexCache.get(name); if (btree != null) *************** *** 154,162 **** } // re-load btree from the store. btree = BTree.load(this.store, entry.addr); ! // save name -> btree mapping in transient cache. ! name2BTree.put(name,btree); // return btree. --- 173,187 ---- } + // /* + // * Reload the index from the store using the object cache to ensure a + // * canonicalizing mapping. + // */ + // btree = journal.getIndex(entry.addr); + // re-load btree from the store. btree = BTree.load(this.store, entry.addr); ! // save name -> btree mapping in transient cache. ! indexCache.put(name,btree); // return btree. *************** *** 164,167 **** --- 189,246 ---- } + + /** + * Return the {@link Addr address} from which the historical state of the + * named index may be loaded. + * <p> + * Note: This is a lower-level access mechanism that is used by + * {@link Journal#getIndex(String, ICommitRecord)} when accessing historical + * named indices from an {@link ICommitRecord}. + * + * @param name + * The index name. + * + * @return The address or <code>0L</code> if the named index was not + * registered. + */ + protected long getAddr(String name) { + + /* + * Note: This uses a private cache to reduce the Unicode -> key + * translation burden. 
We can not use the normal cache since that maps + * the name to the index and we have to return the address not the index + * in order to support a canonicalizing mapping in the Journal. + */ + synchronized (addrCache) { + + Long addr = addrCache.get(name); + + if (addr == null) { + + final Entry entry = (Entry) super.lookup(getKey(name)); + + if (entry == null) { + + addr = 0L; + + } else { + + addr = entry.addr; + + } + + addrCache.put(name, addr); + + } + + return addr; + + } + + } + /** + * A private cache used only by {@link #getAddr(String)}. + */ + private HashMap<String/* name */, Long/* Addr */> addrCache = new HashMap<String, Long>(); /** *************** *** 210,215 **** super.insert(key,new Entry(name,addr)); // add name -> btree mapping to the transient cache. ! name2BTree.put(name, btree); } --- 289,297 ---- super.insert(key,new Entry(name,addr)); + // // touch the btree in the journal's object cache. + // journal.touch(addr, btree); + // add name -> btree mapping to the transient cache. ! indexCache.put(name, btree); } *************** *** 234,238 **** // remove the name -> btree mapping to the transient cache. ! name2BTree.remove(name); // remove the entry from the persistent index. --- 316,320 ---- // remove the name -> btree mapping to the transient cache. ! indexCache.remove(name); // remove the entry from the persistent index. Index: IAtomicStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/IAtomicStore.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** IAtomicStore.java 8 Mar 2007 18:14:07 -0000 1.5 --- IAtomicStore.java 12 Mar 2007 18:06:11 -0000 1.6 *************** *** 133,138 **** * the basis for its operations. * ! * @param timestamp ! * Typically, the timestamp assigned to a transaction. * * @return The {@link ICommitRecord} for the most recent committed state --- 133,138 ---- * the basis for its operations. * ! * @param commitTime ! * Typically, the commit time assigned to a transaction. 
* * @return The {@link ICommitRecord} for the most recent committed state Index: Journal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Journal.java,v retrieving revision 1.60 retrieving revision 1.61 diff -C2 -d -r1.60 -r1.61 *** Journal.java 11 Mar 2007 11:42:46 -0000 1.60 --- Journal.java 12 Mar 2007 18:06:11 -0000 1.61 *************** *** 62,71 **** --- 62,79 ---- import org.apache.log4j.Logger; + import com.bigdata.cache.LRUCache; + import com.bigdata.cache.WeakValueCache; + import com.bigdata.isolation.ReadOnlyIsolatedIndex; import com.bigdata.isolation.UnisolatedBTree; + import com.bigdata.journal.ReadCommittedTx.ReadCommittedIndex; import com.bigdata.objndx.BTree; import com.bigdata.objndx.IIndex; import com.bigdata.objndx.IndexSegment; + import com.bigdata.objndx.ReadOnlyIndex; import com.bigdata.rawstore.Addr; import com.bigdata.rawstore.Bytes; + import com.bigdata.rawstore.IRawStore; + import com.bigdata.scaleup.MasterJournal; + import com.bigdata.scaleup.SlaveJournal; import com.bigdata.scaleup.MasterJournal.Options; import com.bigdata.util.concurrent.DaemonThreadFactory; *************** *** 1127,1130 **** --- 1135,1140 ---- final public long getRootAddr(int index) { + assertOpen(); + if (_commitRecord == null) { *************** *** 1298,1301 **** --- 1308,1313 ---- public IIndex registerIndex(String name, IIndex btree) { + assertOpen(); + if (getIndex(name) != null) { *************** *** 1314,1317 **** --- 1326,1331 ---- public void dropIndex(String name) { + assertOpen(); + name2Addr.dropIndex(name); *************** *** 1327,1330 **** --- 1341,1346 ---- public IIndex getIndex(String name) { + assertOpen(); + if (name == null) throw new IllegalArgumentException(); *************** *** 1341,1370 **** * handling this. */ ! public ICommitRecord getCommitRecord(long comitTime) { ! ! return _commitRecordIndex.find(comitTime); } /** ! * Returns a read-only named index loaded from the given root block. Writes ! * on the index will NOT be made persistent and the index will NOT ! * participate in commits. */ ! public IIndex getIndex(String name, ICommitRecord commitRecord) { /* ! * Note: since this is always a request for historical read-only data, ! * this method MUST NOT register a committer and the returned btree MUST ! * NOT participate in the commit protocol. ! * ! * @todo cache these results in a weak value cache. */ ! return ((Name2Addr) BTree.load(this, commitRecord ! .getRootAddr(ROOT_NAME2ADDR))).get(name); } /* * ITransactionManager and friends. --- 1357,1522 ---- * handling this. */ ! public ICommitRecord getCommitRecord(long commitTime) { ! ! assertOpen(); ! ! return _commitRecordIndex.find(commitTime); } /** ! * Returns a read-only named index loaded from the given root block. This ! * method imposes a canonicalizing mapping and contracts that there will be ! * at most one instance of the historical index at a time. This contract is ! * used to facilitate buffer management. Writes on the index will NOT be ! * made persistent and the index will NOT participate in commits. ! * <p> ! * Note: since this is always a request for historical read-only data, this ! * method MUST NOT register a committer and the returned btree MUST NOT ! * participate in the commit protocol. ! * <p> ! * Note: The caller MUST take care not to permit writes since they could be ! * visible to other users of the same read-only index. This is typically ! 
* accomplished by wrapping the returned object in class that will throw an ! * exception for writes such as {@link ReadOnlyIndex}, ! * {@link ReadOnlyIsolatedIndex}, or {@link ReadCommittedIndex}. */ ! protected IIndex getIndex(String name, ICommitRecord commitRecord) { ! ! assertOpen(); ! ! if(name == null) throw new IllegalArgumentException(); + if(commitRecord == null) throw new IllegalArgumentException(); + /* ! * The address of an historical Name2Addr mapping used to resolve named ! * indices for the historical state associated with this commit record. */ + final long metaAddr = commitRecord.getRootAddr(ROOT_NAME2ADDR); ! /* ! * Resolve the address of the historical Name2Addr object using the ! * canonicalizing object cache. This prevents multiple historical ! * Name2Addr objects springing into existance for the same commit ! * record. ! */ ! Name2Addr name2Addr = (Name2Addr)getIndex(metaAddr); ! ! /* ! * The address at which the named index was written for that historical ! * state. ! */ ! final long indexAddr = name2Addr.getAddr(name); ! ! // No such index by name for that historical state. ! if(indexAddr==0L) return null; ! ! /* ! * Resolve the named index using the object cache to impose a ! * canonicalizing mapping on the historical named indices based on the ! * address on which it was written in the store. ! */ ! return getIndex(indexAddr); ! ! } ! ! /** ! * A cache that is used by the {@link Journal} to provide a canonicalizing ! * mapping from an {@link Addr address} to the instance of a read-only ! * historical object loaded from that {@link Addr address}. ! * <p> ! * Note: the "live" version of an object MUST NOT be placed into this cache ! * since its state will continue to evolve with additional writes while the ! * cache is intended to provide a canonicalizing mapping to only the ! * historical states of the object. This means that objects such as indices ! * and the {@link Name2Addr} index MUST NOT be inserted into the cache if ! * the are being read from the store for "live" use. For this reason ! * {@link Name2Addr} uses its own caching mechanisms. ! * ! * @todo discard cache on abort? that should not be necessary. even through ! * it can contain objects whose addresses were not made restart safe ! * those addresses should not be accessible to the application and ! * hence the objects should never be looked up and will be evicted in ! * due time from the cache. (this does rely on the fact that the store ! * never reuses an address.) ! * ! * FIXME This is the place to solve the resource (RAM) burden for indices is ! * Name2Addr. Currently, indices are never closed once opened which is a ! * resource leak. We need to close them out eventually based on LRU plus ! * timeout plus NOT IN USE. The way to approach this is a weak reference ! * cache combined with an LRU or hard reference queue that tracks reference ! * counters (just like the BTree hard reference cache for leaves). Eviction ! * events lead to closing an index iff the reference counter is zero. ! * Touches keep recently used indices from closing even though they may have ! * a zero reference count. ! * ! * @todo the {@link MasterJournal} needs to do similar things with ! * {@link IndexSegment}. ! * ! * @todo review the metadata index lookup in the {@link SlaveJournal}. This ! * is a somewhat different case since we only need to work with the ! * current metadata index as along as we make sure not to reclaim ! * resources (journals and index segments) until there are no more ! 
* transactions which can read from them. ! * ! * @todo support metering of index resources and timeout based shutdown of ! * indices. note that the "live" {@link Name2Addr} has its own cache ! * for the unisolated indices and that metering needs to pay attention ! * to the indices in that cache as well. Also, those indices can be ! * shutdown as long as they are not dirty (pending a commit). ! */ ! final private WeakValueCache<Long, ICommitter> objectCache = new WeakValueCache<Long, ICommitter>( ! new LRUCache<Long, ICommitter>(20)); ! ! /** ! * A canonicalizing mapping for index objects. ! * ! * @param addr ! * The {@link Addr address} of the index object. ! * ! * @return The index object. ! */ ! final protected IIndex getIndex(long addr) { ! ! synchronized (objectCache) { ! ! IIndex obj = (IIndex) objectCache.get(addr); + if (obj == null) { + + obj = BTree.load(this,addr); + + } + + objectCache.put(addr, (ICommitter)obj, false/*dirty*/); + + return obj; + + } + } + // /** + // * Insert or touch an object in the object cache. + // * + // * @param addr + // * The {@link Addr address} of the object in the store. + // * @param obj + // * The object. + // * + // * @see #getIndex(long), which provides a canonicalizing mapping for index + // * objects using the object cache. + // */ + // final protected void touch(long addr,Object obj) { + // + // synchronized(objectCache) { + // + // objectCache.put(addr, (ICommitter)obj, false/*dirty*/); + // + // } + // + // } + /* * ITransactionManager and friends. Index: TemporaryStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/TemporaryStore.java,v retrieving revision 1.7 retrieving revision 1.8 diff -C2 -d -r1.7 -r1.8 *** TemporaryStore.java 8 Mar 2007 18:14:06 -0000 1.7 --- TemporaryStore.java 12 Mar 2007 18:06:11 -0000 1.8 *************** *** 48,52 **** package com.bigdata.journal; - import com.bigdata.objndx.AbstractBTree; import com.bigdata.objndx.BTree; import com.bigdata.objndx.ByteArrayValueSerializer; --- 48,51 ---- Index: Tx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Tx.java,v retrieving revision 1.35 retrieving revision 1.36 diff -C2 -d -r1.35 -r1.36 *** Tx.java 11 Mar 2007 11:42:44 -0000 1.35 --- Tx.java 12 Mar 2007 18:06:11 -0000 1.36 *************** *** 58,64 **** import com.bigdata.objndx.BTree; import com.bigdata.objndx.IIndex; - import com.bigdata.objndx.IndexSegment; import com.bigdata.rawstore.Bytes; - import com.bigdata.scaleup.MetadataIndex; import com.bigdata.scaleup.PartitionedIndexView; --- 58,62 ---- *************** *** 360,363 **** --- 358,362 ---- } else { + // writeable index backed by the temp. store. index = new IsolatedBTree(tmpStore,src); Index: ReadCommittedTx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/ReadCommittedTx.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** ReadCommittedTx.java 28 Feb 2007 13:59:09 -0000 1.1 --- ReadCommittedTx.java 12 Mar 2007 18:06:11 -0000 1.2 *************** *** 212,216 **** this.commitRecord = currentCommitRecord; ! // lookup the current index view against that commit record. this.index = (IIsolatableIndex) tx.journal.getIndex(name, commitRecord); --- 212,219 ---- this.commitRecord = currentCommitRecord; ! /* ! * Lookup the current committed index view against that commit ! * record. ! 
*/ this.index = (IIsolatableIndex) tx.journal.getIndex(name, commitRecord); Index: CommitRecordIndex.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/CommitRecordIndex.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** CommitRecordIndex.java 21 Feb 2007 20:17:21 -0000 1.2 --- CommitRecordIndex.java 12 Mar 2007 18:06:11 -0000 1.3 *************** *** 7,10 **** --- 7,12 ---- import org.CognitiveWeb.extser.LongPacker; + import com.bigdata.cache.LRUCache; + import com.bigdata.cache.WeakValueCache; import com.bigdata.objndx.BTree; import com.bigdata.objndx.BTreeMetadata; *************** *** 18,22 **** * integers. The values are {@link Entry} objects recording the commit time of * the index and the {@link Addr address} of the {@link ICommitRecord} for that ! * commit time. */ public class CommitRecordIndex extends BTree { --- 20,27 ---- * integers. The values are {@link Entry} objects recording the commit time of * the index and the {@link Addr address} of the {@link ICommitRecord} for that ! * commit time. A canonicalizing cache is maintained such that the caller will ! * never observe distinct concurrent instances of the same {@link ICommitRecord}. ! * This in turn facilitates canonicalizing caches for objects loaded from that ! * {@link ICommitRecord}. */ public class CommitRecordIndex extends BTree { *************** *** 26,38 **** */ private KeyBuilder keyBuilder = new KeyBuilder(); ! ! // /** ! // * Cache of added/retrieved commit records. ! // * ! // * @todo This only works for exact timestamp matches so the cache might not ! // * be very useful here. Also, this must be a weak value cache or it will ! // * leak memory. ! // */ ! // private Map<Long, ICommitRecord> cache = new HashMap<Long, ICommitRecord>(); public CommitRecordIndex(IRawStore store) { --- 31,50 ---- */ private KeyBuilder keyBuilder = new KeyBuilder(); ! ! /** ! * A weak value cache for {@link ICommitRecord}s. Note that lookup may be ! * by exact match -or- by the record have the largest timestamp LTE to the ! * given probe. For the latter, we have to determine the timestamp of the ! * matching record and use that to test the cache (after we have already ! * done the lookup in the index). ! * <p> ! * The main purpose of this cache is not to speed up access to historical ! * {@link ICommitRecord}s but rather to establish a canonicalizing lookup ! * service for {@link ICommitRecord}s so that we can in turn establish a ! * canonicalizing mapping for read-only objects (typically the unnamed ! * indices) loaded from a given {@link ICommitRecord}. ! */ ! final private WeakValueCache<Long, ICommitRecord> cache = new WeakValueCache<Long, ICommitRecord>( ! new LRUCache<Long, ICommitRecord>(10)); public CommitRecordIndex(IRawStore store) { *************** *** 74,83 **** /** ! * Existence test for a commit record with the specified commit timestamp. * * @param commitTime * The commit timestamp. * @return true iff such an {@link ICommitRecord} exists in the index with ! * that commit timestamp. */ synchronized public boolean hasTimestamp(long commitTime) { --- 86,97 ---- /** ! * Existence test for a commit record with the specified commit timestamp ! * (exact match). * * @param commitTime * The commit timestamp. + * * @return true iff such an {@link ICommitRecord} exists in the index with ! * that commit timestamp (exact match(. 
*/ synchronized public boolean hasTimestamp(long commitTime) { *************** *** 89,94 **** /** * Return the {@link ICommitRecord} with the given timestamp (exact match). - * This method tests a cache of the named btrees and will return the same - * instance if the index is found in the cache. * * @param commitTime --- 103,106 ---- *************** *** 101,110 **** ICommitRecord commitRecord; ! // commitRecord = cache.get(commitTime); ! // ! // if (commitRecord != null) ! // return commitRecord; final Entry entry = (Entry) super.lookup(getKey(commitTime)); --- 113,124 ---- ICommitRecord commitRecord; + + // exact match cache test. + commitRecord = cache.get(commitTime); ! if (commitRecord != null) ! return commitRecord; + // exact match index lookup. final Entry entry = (Entry) super.lookup(getKey(commitTime)); *************** *** 120,129 **** commitRecord = loadCommitRecord(store,entry.addr); ! // /* ! // * save commit time -> commit record mapping in transient cache (this ! // * only works for for exact matches on the timestamp so the cache may ! // * not be very useful here). ! // */ ! // cache.put(commitRecord.getTimestamp(),commitRecord); // return btree. --- 134,141 ---- commitRecord = loadCommitRecord(store,entry.addr); ! /* ! * save commit time -> commit record mapping in transient cache. ! */ ! cache.put(commitRecord.getTimestamp(),commitRecord,false/*dirty*/); // return btree. *************** *** 134,138 **** /** * Return the {@link ICommitRecord} having the largest timestamp that is ! * strictly less than the given timestamp. * * @param timestamp --- 146,154 ---- /** * Return the {@link ICommitRecord} having the largest timestamp that is ! * less than or equal to the given timestamp. This is used primarily to ! * locate the commit record that will serve as the ground state for a ! * transaction having <i>timestamp</i> as its start time. In this context ! * the LTE search identifies the most recent commit state that is never the ! * less earlier than the start time of the transaction. * * @param timestamp *************** *** 140,144 **** * * @return The commit record -or- <code>null</code> iff there are no ! * {@link ICommitRecord}s in the that satisify the probe. * * @see #get(long) --- 156,160 ---- * * @return The commit record -or- <code>null</code> iff there are no ! * {@link ICommitRecord}s in the index that satisify the probe. * * @see #get(long) *************** *** 146,149 **** --- 162,166 ---- synchronized public ICommitRecord find(long timestamp) { + // find (first less than or equal to). final int index = findIndexOf(timestamp); *************** *** 156,164 **** } ! // return the matched record. ! Entry entry = (Entry) super.valueAt( index ); ! return loadCommitRecord(store,entry.addr); } --- 173,207 ---- } ! /* ! * Retrieve the entry for the commit record from the index. This ! * also stores the actual commit time for the commit record. ! */ Entry entry = (Entry) super.valueAt( index ); + + /* + * Test the cache for this commit record using its actual commit time. + */ + ICommitRecord commitRecord = cache.get(entry.commitTime); + + if(commitRecord == null) { + + /* + * Load the commit record from the store using the address stored in + * the entry. + */ ! commitRecord = loadCommitRecord(store,entry.addr); ! ! assert entry.commitTime == commitRecord.getTimestamp(); ! ! /* ! * Insert the commit record into the cache usings its actual commit ! * time. ! */ ! cache.put(entry.commitTime,commitRecord,false/* dirty */); ! ! } ! ! 
return commitRecord; } *************** *** 268,273 **** super.insert(key,new Entry(commitTime,commitRecordAddr)); ! // // add to the transient cache. ! // cache.put(commitTime, commitRecord); } --- 311,319 ---- super.insert(key,new Entry(commitTime,commitRecordAddr)); ! // should not be an existing entry for that commit time. ! assert cache.get(commitTime) == null; ! ! // add to the transient cache. ! cache.put(commitTime, commitRecord, false/*dirty*/); } |
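The canonicalizing object cache added to Journal above pairs a WeakValueCache with a small LRUCache of hard references. Below is a minimal, self-contained sketch of the core idea, assuming only java.lang.ref; the class and Loader names are hypothetical and the hard-reference LRU ring kept by the real WeakValueCache is omitted. As long as any caller still holds the object loaded for a given address, every further lookup of that address returns the same instance, which is what prevents multiple copies of the same historical Name2Addr or index state from springing into existence.

import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

/** Sketch of a canonicalizing address-to-object cache backed by weak references. */
public class CanonicalizingCacheSketch<V> {

    /** Stand-in for re-loading a persistent object, e.g. BTree.load(store, addr). */
    public interface Loader<T> {
        T load(long addr);
    }

    private final Map<Long, WeakReference<V>> cache = new HashMap<Long, WeakReference<V>>();
    private final Loader<V> loader;

    public CanonicalizingCacheSketch(Loader<V> loader) {
        this.loader = loader;
    }

    /**
     * Return the canonical instance for an address. While any caller holds a
     * hard reference to the result, repeated lookups of the same address return
     * the same object; once all hard references are gone the GC may clear the
     * entry and a later lookup re-loads it from the store.
     */
    public synchronized V get(long addr) {
        final WeakReference<V> ref = cache.get(addr);
        V obj = (ref == null) ? null : ref.get();
        if (obj == null) {
            obj = loader.load(addr);
            cache.put(addr, new WeakReference<V>(obj));
        }
        return obj;
    }
}

As the Name2Addr and Journal comments above stress, the "live" unisolated indices are deliberately kept out of such a mapping, since their state keeps evolving and must never be handed out as a historical view.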
From: Bryan T. <tho...@us...> - 2007-03-12 18:06:19
|
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv21164/src/java/com/bigdata/scaleup Modified Files: IsolatablePartitionedIndexView.java Name2MetadataAddr.java Log Message: Working on canonicalizing mappings to improve resource utilization. Index: Name2MetadataAddr.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/Name2MetadataAddr.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** Name2MetadataAddr.java 11 Mar 2007 11:42:41 -0000 1.4 --- Name2MetadataAddr.java 12 Mar 2007 18:06:12 -0000 1.5 *************** *** 67,70 **** --- 67,75 ---- } + /** + * Deserialization constructor. + * @param store + * @param metadata + */ public Name2MetadataAddr(IRawStore store, BTreeMetadata metadata) { Index: IsolatablePartitionedIndexView.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/IsolatablePartitionedIndexView.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** IsolatablePartitionedIndexView.java 11 Mar 2007 11:42:42 -0000 1.1 --- IsolatablePartitionedIndexView.java 12 Mar 2007 18:06:12 -0000 1.2 *************** *** 73,77 **** public IsolatablePartitionedIndexView(UnisolatedBTree btree, MetadataIndex mdi) { super(btree, mdi); ! throw new UnsupportedOperationException(); } --- 73,77 ---- public IsolatablePartitionedIndexView(UnisolatedBTree btree, MetadataIndex mdi) { super(btree, mdi); ! // throw new UnsupportedOperationException(); } |
From: Bryan T. <tho...@us...> - 2007-03-12 18:06:18
|
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/cache In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv21164/src/java/com/bigdata/cache Modified Files: WeakValueCache.java Log Message: Working on canonicalizing mappings to improve resource utilization. Index: WeakValueCache.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/cache/WeakValueCache.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** WeakValueCache.java 8 Nov 2006 19:20:47 -0000 1.2 --- WeakValueCache.java 12 Mar 2007 18:06:12 -0000 1.3 *************** *** 272,276 **** _loadFactor = loadFactor; ! _cache = new HashMap<K,IWeakRefCacheEntry<K,T>>( initialCapacity, loadFactor ); --- 272,280 ---- _loadFactor = loadFactor; ! ! /* ! * @todo Consider changing to a concurrent hash map. Would this let us ! * remove some of the synchronization constraints? Probably not. ! */ _cache = new HashMap<K,IWeakRefCacheEntry<K,T>>( initialCapacity, loadFactor ); |
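The @todo added to WeakValueCache above asks whether switching the backing HashMap to a ConcurrentHashMap would remove the synchronization constraints, and answers "probably not". A small sketch of why, using only JDK types (the names are hypothetical, not the WeakValueCache API): putIfAbsent makes the insert race safe, but because the values are weak references that the GC can clear at any moment, a lookup still has to detect cleared entries and retry, and a losing thread must discard its freshly loaded object in favour of the winner's to keep the mapping canonical.

import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Sketch: a lock-free canonicalizing lookup still needs a retry loop around cleared weak refs. */
public class ConcurrentWeakValueSketch {

    private final ConcurrentMap<Long, WeakReference<Object>> map =
            new ConcurrentHashMap<Long, WeakReference<Object>>();

    public Object getOrLoad(final long addr) {
        while (true) {
            final WeakReference<Object> ref = map.get(addr);
            if (ref != null) {
                final Object cached = ref.get();
                if (cached != null) {
                    return cached;              // canonical instance is still live
                }
                map.remove(addr, ref);          // entry was cleared by the GC; retry
            }
            final Object loaded = load(addr);   // possibly wasted work if we lose the race
            final WeakReference<Object> mine = new WeakReference<Object>(loaded);
            final WeakReference<Object> prior = map.putIfAbsent(addr, mine);
            if (prior == null) {
                return loaded;                  // we installed the canonical instance
            }
            // Another thread installed an instance first; loop and use that one instead.
        }
    }

    private Object load(long addr) {
        return new Object();                    // stand-in for reading the record from the store
    }
}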
From: Bryan T. <tho...@us...> - 2007-03-12 18:06:17
|
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/objndx In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv21164/src/test/com/bigdata/objndx Modified Files: TestNodeSerializer.java Log Message: Working on canonicalizing mappings to improve resource utilization. Index: TestNodeSerializer.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/objndx/TestNodeSerializer.java,v retrieving revision 1.15 retrieving revision 1.16 diff -C2 -d -r1.15 -r1.16 *** TestNodeSerializer.java 9 Feb 2007 16:13:18 -0000 1.15 --- TestNodeSerializer.java 12 Mar 2007 18:06:12 -0000 1.16 *************** *** 526,530 **** */ ! if(nodeSer.recordCompressor==null ) { final int checksum = buf.getInt(NodeSerializer.OFFSET_CHECKSUM); --- 526,530 ---- */ ! if(nodeSer.useChecksum && nodeSer.recordCompressor==null ) { final int checksum = buf.getInt(NodeSerializer.OFFSET_CHECKSUM); |
From: Bryan T. <tho...@us...> - 2007-03-12 18:06:16
|
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/journal In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv21164/src/test/com/bigdata/journal Modified Files: TestCommitHistory.java StressTestConcurrent.java Log Message: Working on canonicalizing mappings to improve resource utilization. Index: TestCommitHistory.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestCommitHistory.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** TestCommitHistory.java 22 Feb 2007 16:59:34 -0000 1.2 --- TestCommitHistory.java 12 Mar 2007 18:06:12 -0000 1.3 *************** *** 49,57 **** import java.nio.ByteBuffer; ! import java.util.Properties; /** ! * Test the ability to get (exact match) and find (most recent less than ! * or equal to) historical commit records in a {@link Journal}. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> --- 49,60 ---- import java.nio.ByteBuffer; ! ! import com.bigdata.objndx.BTree; /** ! * Test the ability to get (exact match) and find (most recent less than or ! * equal to) historical commit records in a {@link Journal}. Also verifies that ! * a canonicalizing cache is maintained (you never obtain distinct concurrent ! * instances of the same commit record). * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> *************** *** 163,170 **** public void test_commitRecordIndex_restartSafe() { - // Properties properties = getProperties(); - // - // properties.setProperty(Options.DELETE_ON_CLOSE,"false"); - Journal journal = new Journal(getProperties()); --- 166,169 ---- *************** *** 229,236 **** public void test_commitRecordIndex_find() { - // Properties properties = getProperties(); - - // properties.setProperty(Options.DELETE_ON_CLOSE,"false"); - Journal journal = new Journal(getProperties()); --- 228,231 ---- *************** *** 347,350 **** --- 342,518 ---- } + + /** + * Test verifies that exact match and find always return the same reference + * for the same commit record (at least as long as the test holds a hard + * reference to the commit record of interest). + */ + public void test_canonicalizingCache() { + + Journal journal = new Journal(getProperties()); + + /* + * The first commit flushes the root leaves of some indices so we get + * back a non-zero commit timestamp. + */ + final long commitTime0 = journal.commit(); + + assertTrue(commitTime0 != 0L); + + /* + * obtain the commit record for that commit timestamp. + */ + ICommitRecord commitRecord0 = journal.getCommitRecord(commitTime0); + + // should be the same instance that is held by the journal. + assertTrue(commitRecord0 == journal.getCommitRecord()); + + /* + * write a record on the store, commit the store, and note the commit + * time. + */ + journal.write(ByteBuffer.wrap(new byte[]{1,2,3})); + + final long commitTime1 = journal.commit(); + + assertTrue(commitTime1!=0L); + + /* + * obtain the commit record for that commit timestamp. + */ + ICommitRecord commitRecord1 = journal.getCommitRecord(commitTime1); + + // should be the same instance that is held by the journal. + assertTrue(commitRecord1 == journal.getCommitRecord()); + + /* + * verify that we obtain the same instance with find as with an exact + * match. 
+ */ + + assertTrue(commitRecord0 == journal.getCommitRecord(commitTime1 - 1)); + + assertTrue(commitRecord1 == journal.getCommitRecord(commitTime1 + 0 )); + + assertTrue(commitRecord1 == journal.getCommitRecord(commitTime1 + 1)); + + journal.closeAndDelete(); + + } + + /** + * Test of the canonicalizing object cache used to prevent distinct + * instances of a historical index from being created. The test also + * verifies that the historical named index is NOT the same instance as the + * current unisolated index by that name. + */ + public void test_objectCache() { + + Journal journal = new Journal(getProperties()); + + final String name = "abc"; + + final BTree liveIndex = (BTree) journal.registerIndex(name); + + final long commitTime0 = journal.commit(); + + assertTrue(commitTime0 != 0L); + + /* + * obtain the commit record for that commit timestamp. + */ + ICommitRecord commitRecord0 = journal.getCommitRecord(commitTime0); + + // should be the same instance that is held by the journal. + assertTrue(commitRecord0 == journal.getCommitRecord()); + + /* + * verify that a request for last committed state the named index + * returns a different instance than the "live" index. + */ + + final BTree historicalIndex0 = (BTree)journal.getIndex(name, commitRecord0); + + assertTrue(liveIndex != historicalIndex0); + + // re-request is still the same object. + assertTrue(historicalIndex0 == (BTree) journal.getIndex(name, + commitRecord0)); + + /* + * The re-load address for the live index as of that commit record. + */ + final long liveIndexAddr0 = liveIndex.getMetadata().getMetadataAddr(); + + /* + * write a record on the store, commit the store, and note the commit + * time. + */ + journal.write(ByteBuffer.wrap(new byte[]{1,2,3})); + + final long commitTime1 = journal.commit(); + + assertTrue(commitTime1!=0L); + + /* + * we did NOT write on the named index, so its address in the store must + * not change. + */ + assertEquals(liveIndexAddr0,liveIndex.getMetadata().getMetadataAddr()); + + // obtain the commit record for that commit timestamp. + ICommitRecord commitRecord1 = journal.getCommitRecord(commitTime1); + + // should be the same instance that is held by the journal. + assertTrue(commitRecord1 == journal.getCommitRecord()); + + /* + * verify that we get the same historical index object for the new + * commit record since the index state was not changed and it will be + * reloaded from the same address. + */ + assertTrue(historicalIndex0 == (BTree) journal.getIndex(name, + commitRecord1)); + + // re-request is still the same object. + assertTrue(historicalIndex0 == (BTree) journal.getIndex(name, + commitRecord0)); + + // re-request is still the same object. + assertTrue(historicalIndex0 == (BTree) journal.getIndex(name, + commitRecord1)); + + /* + * Now write on the live index and commit. verify that there is a new + * historical index available for the new commit record, that it is not + * the same as the live index, and that it is not the same as the + * previous historical index (which should still be accessible). + */ + + // live index is the same reference. + assertTrue(liveIndex == journal.getIndex(name)); + + liveIndex.insert(new byte[]{1,2}, new byte[]{1,2}); + + final long commitTime2 = journal.commit(); + + // obtain the commit record for that commit timestamp. + ICommitRecord commitRecord2 = journal.getCommitRecord(commitTime2); + + // should be the same instance that is held by the journal. 
+ assertTrue(commitRecord2 == journal.getCommitRecord()); + + // must be a different index object. + + BTree historicalIndex2 = (BTree) journal.getIndex(name, commitRecord2); + + assertTrue(historicalIndex0 != historicalIndex2); + + // the live index must be distinct from the historical index. + assertTrue(liveIndex != historicalIndex2); + + journal.closeAndDelete(); + + } } Index: StressTestConcurrent.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/StressTestConcurrent.java,v retrieving revision 1.7 retrieving revision 1.8 diff -C2 -d -r1.7 -r1.8 *** StressTestConcurrent.java 6 Mar 2007 20:38:06 -0000 1.7 --- StressTestConcurrent.java 12 Mar 2007 18:06:12 -0000 1.8 *************** *** 380,388 **** Properties properties = new Properties(); ! // properties.setProperty(Options.BUFFER_MODE, BufferMode.Transient.toString()); properties.setProperty(Options.FORCE_ON_COMMIT,ForceEnum.No.toString()); ! properties.setProperty(Options.BUFFER_MODE, BufferMode.Direct.toString()); // properties.setProperty(Options.BUFFER_MODE, BufferMode.Mapped.toString()); --- 380,388 ---- Properties properties = new Properties(); ! properties.setProperty(Options.BUFFER_MODE, BufferMode.Transient.toString()); properties.setProperty(Options.FORCE_ON_COMMIT,ForceEnum.No.toString()); ! // properties.setProperty(Options.BUFFER_MODE, BufferMode.Direct.toString()); // properties.setProperty(Options.BUFFER_MODE, BufferMode.Mapped.toString()); |
From: Bryan T. <tho...@us...> - 2007-03-12 18:06:16
|
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/objndx In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv21164/src/java/com/bigdata/objndx Modified Files: AbstractBTree.java BTree.java Log Message: Working on canonicalizing mappings to improve resource utilization. Index: AbstractBTree.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/objndx/AbstractBTree.java,v retrieving revision 1.18 retrieving revision 1.19 diff -C2 -d -r1.18 -r1.19 *** AbstractBTree.java 8 Mar 2007 18:14:05 -0000 1.18 --- AbstractBTree.java 12 Mar 2007 18:06:12 -0000 1.19 *************** *** 840,844 **** * </p> * <p> ! * This method guarentees that the specified node will NOT be synchronously * persisted as a side effect and thereby made immutable. (Of course, the * node may be already immutable.) --- 840,844 ---- * </p> * <p> ! * This method guarantees that the specified node will NOT be synchronously * persisted as a side effect and thereby made immutable. (Of course, the * node may be already immutable.) Index: BTree.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/objndx/BTree.java,v retrieving revision 1.37 retrieving revision 1.38 diff -C2 -d -r1.37 -r1.38 *** BTree.java 11 Mar 2007 11:41:45 -0000 1.37 --- BTree.java 12 Mar 2007 18:06:12 -0000 1.38 *************** *** 487,491 **** /** ! * Reload a btree using a default hard reference queue configuration. * * @param store --- 487,491 ---- /** ! * Load from the store (required de-serialization constructor). * * @param store |
From: Bryan T. <tho...@us...> - 2007-03-12 18:06:16
|
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/isolation In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv21164/src/test/com/bigdata/isolation Modified Files: TestIsolatedBTree.java Log Message: Working on canonicalizing mappings to improve resource utilization. Index: TestIsolatedBTree.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/isolation/TestIsolatedBTree.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** TestIsolatedBTree.java 11 Mar 2007 11:42:49 -0000 1.3 --- TestIsolatedBTree.java 12 Mar 2007 18:06:12 -0000 1.4 *************** *** 135,139 **** final long addr = btree.write(); ! btree = (IsolatedBTree)BTree.load(store,addr); assertTrue(store==btree.getStore()); --- 135,141 ---- final long addr = btree.write(); ! // special constructor is required. ! btree = (IsolatedBTree) new IsolatedBTree(store, BTreeMetadata ! .read(store, addr), src); assertTrue(store==btree.getStore()); *************** *** 155,159 **** final long addr = btree.write(); ! btree = (IsolatedBTree)BTree.load(store,addr); assertTrue(store == btree.getStore()); --- 157,163 ---- final long addr = btree.write(); ! // special constructor is required. ! btree = (IsolatedBTree) new IsolatedBTree(store, BTreeMetadata ! .read(store, addr), src); assertTrue(store == btree.getStore()); |
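The two hunks above replace BTree.load(store, addr) with an explicit IsolatedBTree constructor because the isolated view needs its source index (src), which is not recorded in the persistent BTreeMetadata and therefore cannot be supplied by a generic load-by-address. A toy illustration of that constraint follows, with purely hypothetical class names standing in for the real ones.

/** Sketch: a view that depends on a runtime collaborator cannot be re-opened by address alone. */
public class IsolatedReloadSketch {

    /** What is durable for the index: enough to re-open the plain structure. */
    static class Metadata {
        final long rootAddr;
        Metadata(long rootAddr) { this.rootAddr = rootAddr; }
    }

    static class PlainIndex {
        final Metadata metadata;
        PlainIndex(Metadata metadata) { this.metadata = metadata; }
    }

    /** The isolated view also needs the ground-state index it reads through. */
    static class IsolatedIndex extends PlainIndex {
        final PlainIndex groundState;
        IsolatedIndex(Metadata metadata, PlainIndex groundState) {
            super(metadata);
            this.groundState = groundState; // not recoverable from Metadata alone
        }
    }

    public static void main(String[] args) {
        final PlainIndex src = new PlainIndex(new Metadata(7L));   // the unisolated source index
        final Metadata md = new Metadata(42L);                     // as if read back by address
        final IsolatedIndex view = new IsolatedIndex(md, src);     // src must be passed explicitly
        System.out.println(view.groundState == src);               // prints true
    }
}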
From: Bryan T. <tho...@us...> - 2007-03-11 11:43:39
|
Update of /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/rio In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv6099/src/test/com/bigdata/rdf/rio Modified Files: TestRioIntegration.java Log Message: Continued minor refactoring in line with model updates. Index: TestRioIntegration.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestRioIntegration.java,v retrieving revision 1.16 retrieving revision 1.17 diff -C2 -d -r1.16 -r1.17 *** TestRioIntegration.java 6 Mar 2007 20:38:13 -0000 1.16 --- TestRioIntegration.java 11 Mar 2007 11:43:33 -0000 1.17 *************** *** 54,58 **** import com.bigdata.rdf.AbstractTripleStoreTestCase; import com.bigdata.rdf.TripleStore; ! import com.bigdata.scaleup.PartitionedJournal.Options; /** --- 54,58 ---- import com.bigdata.rdf.AbstractTripleStoreTestCase; import com.bigdata.rdf.TripleStore; ! import com.bigdata.scaleup.MasterJournal.Options; /** |
From: Bryan T. <tho...@us...> - 2007-03-11 11:43:39
|
Update of /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv6099/src/java/com/bigdata/rdf Modified Files: TripleStore.java Log Message: Continued minor refactoring in line with model updates. Index: TripleStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/TripleStore.java,v retrieving revision 1.21 retrieving revision 1.22 diff -C2 -d -r1.21 -r1.22 *** TripleStore.java 6 Mar 2007 20:38:13 -0000 1.21 --- TripleStore.java 11 Mar 2007 11:43:32 -0000 1.22 *************** *** 89,93 **** import com.bigdata.rdf.serializers.StatementSerializer; import com.bigdata.rdf.serializers.TermIdSerializer; ! import com.bigdata.scaleup.PartitionedIndex; import com.bigdata.scaleup.SlaveJournal; import com.ibm.icu.text.Collator; --- 89,93 ---- import com.bigdata.rdf.serializers.StatementSerializer; import com.bigdata.rdf.serializers.TermIdSerializer; ! import com.bigdata.scaleup.PartitionedIndexView; import com.bigdata.scaleup.SlaveJournal; import com.ibm.icu.text.Collator; *************** *** 140,144 **** * @todo Refactor to use a delegation mechanism so that we can run with or * without partitioned indices? (All you have to do now is change the ! * class that is being extended from Journal to PartitionedJournal and * handle some different initialization properties.) * --- 140,144 ---- * @todo Refactor to use a delegation mechanism so that we can run with or * without partitioned indices? (All you have to do now is change the ! * class that is being extended from Journal to MasterJournal and * handle some different initialization properties.) * *************** *** 397,402 **** * * FIXME setupCommitters and discardCommitters are not invoked when we are ! * using a {@link PartitionedIndex}. I need to refactor the apis so that ! * these methods do not appear on {@link PartitionedIndex} or that they are * correctly invoked by the {@link SlaveJournal}. I expect that they are * not required on {@link IJournal} but only on {@link Journal}. --- 397,402 ---- * * FIXME setupCommitters and discardCommitters are not invoked when we are ! * using a {@link PartitionedIndexView}. I need to refactor the apis so that ! * these methods do not appear on {@link PartitionedIndexView} or that they are * correctly invoked by the {@link SlaveJournal}. I expect that they are * not required on {@link IJournal} but only on {@link Journal}. *************** *** 410,414 **** /* * This is here as a work around so that this counter gets initialized ! * whether or not we are extending Journal vs PartitionedJournal. */ getCounter(); --- 410,414 ---- /* * This is here as a work around so that this counter gets initialized ! * whether or not we are extending Journal vs MasterJournal. */ getCounter(); |
From: Bryan T. <tho...@us...> - 2007-03-11 11:43:39
|
Update of /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/rio In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv6099/src/java/com/bigdata/rdf/rio Modified Files: BulkRioLoader.java Log Message: Continued minor refactoring in line with model updates. Index: BulkRioLoader.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/rio/BulkRioLoader.java,v retrieving revision 1.11 retrieving revision 1.12 diff -C2 -d -r1.11 -r1.12 *** BulkRioLoader.java 8 Mar 2007 18:14:40 -0000 1.11 --- BulkRioLoader.java 11 Mar 2007 11:43:32 -0000 1.12 *************** *** 70,74 **** import com.bigdata.rdf.serializers.StatementSerializer; import com.bigdata.rdf.serializers.TermIdSerializer; ! import com.bigdata.scaleup.PartitionedJournal; /** --- 70,74 ---- import com.bigdata.rdf.serializers.StatementSerializer; import com.bigdata.rdf.serializers.TermIdSerializer; ! import com.bigdata.scaleup.MasterJournal; /** *************** *** 354,358 **** * getTermIndices:Iterator<IndexSegment>?) * ! * @deprecated Replace this with the use of a {@link PartitionedJournal}. */ public static class Indices { --- 354,358 ---- * getTermIndices:Iterator<IndexSegment>?) * ! * @deprecated Replace this with the use of a {@link MasterJournal}. */ public static class Indices { |
From: Bryan T. <tho...@us...> - 2007-03-11 11:43:39
|
Update of /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv6099/src/test/com/bigdata/rdf Modified Files: AbstractTripleStoreTestCase.java TestRestartSafe.java Log Message: Continued minor refactoring in line with model updates. Index: TestRestartSafe.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/TestRestartSafe.java,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -d -r1.8 -r1.9 *** TestRestartSafe.java 22 Feb 2007 16:58:59 -0000 1.8 --- TestRestartSafe.java 11 Mar 2007 11:43:32 -0000 1.9 *************** *** 58,62 **** import com.bigdata.rdf.model.OptimizedValueFactory._Literal; import com.bigdata.rdf.model.OptimizedValueFactory._URI; ! import com.bigdata.scaleup.PartitionedJournal.Options; /** --- 58,62 ---- import com.bigdata.rdf.model.OptimizedValueFactory._Literal; import com.bigdata.rdf.model.OptimizedValueFactory._URI; ! import com.bigdata.scaleup.MasterJournal.Options; /** Index: AbstractTripleStoreTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/AbstractTripleStoreTestCase.java,v retrieving revision 1.11 retrieving revision 1.12 diff -C2 -d -r1.11 -r1.12 *** AbstractTripleStoreTestCase.java 22 Feb 2007 16:58:59 -0000 1.11 --- AbstractTripleStoreTestCase.java 11 Mar 2007 11:43:31 -0000 1.12 *************** *** 55,59 **** import com.bigdata.journal.BufferMode; import com.bigdata.journal.ForceEnum; ! import com.bigdata.scaleup.PartitionedJournal.Options; import com.bigdata.rawstore.Bytes; --- 55,59 ---- import com.bigdata.journal.BufferMode; import com.bigdata.journal.ForceEnum; ! import com.bigdata.scaleup.MasterJournal.Options; import com.bigdata.rawstore.Bytes; |
From: Bryan T. <tho...@us...> - 2007-03-11 11:43:37
|
Update of /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/inf In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv6099/src/test/com/bigdata/rdf/inf Modified Files: AbstractInferenceEngineTestCase.java Log Message: Continued minor refactoring in line with model updates. Index: AbstractInferenceEngineTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/inf/AbstractInferenceEngineTestCase.java,v retrieving revision 1.6 retrieving revision 1.7 diff -C2 -d -r1.6 -r1.7 *** AbstractInferenceEngineTestCase.java 22 Feb 2007 16:58:58 -0000 1.6 --- AbstractInferenceEngineTestCase.java 11 Mar 2007 11:43:32 -0000 1.7 *************** *** 55,59 **** import com.bigdata.journal.BufferMode; import com.bigdata.journal.Journal; ! import com.bigdata.scaleup.PartitionedJournal.Options; /** --- 55,59 ---- import com.bigdata.journal.BufferMode; import com.bigdata.journal.Journal; ! import com.bigdata.scaleup.MasterJournal.Options; /** |
From: Bryan T. <tho...@us...> - 2007-03-11 11:43:29
|
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/journal In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv5433/src/java/com/bigdata/journal Modified Files: Tx.java ITx.java IRootBlockView.java AbstractTx.java ITransactionManager.java IJournal.java Name2Addr.java IBufferStrategy.java ICommitter.java Journal.java IIndexManager.java Added Files: IIndexStore.java Removed Files: IStore.java Log Message: Continued minor refactoring in line with model updates. Index: IJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/IJournal.java,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -d -r1.8 -r1.9 *** IJournal.java 8 Mar 2007 18:14:06 -0000 1.8 --- IJournal.java 11 Mar 2007 11:42:45 -0000 1.9 *************** *** 94,101 **** /** ! * Return the named index isolated by the transaction having the ! * specified start time. * ! * @param startTime The transaction start time. */ public IIndex getIndex(String name, long startTime); --- 94,113 ---- /** ! * Return the named index as isolated by the transaction having the ! * specified transaction start time. * ! * @param name ! * The index name. ! * @param startTime ! * The transaction start time, which serves as the unique ! * identifier for the transaction. ! * ! * @return The isolated index. ! * ! * @exception IllegalArgumentException ! * if <i>name</i> is <code>null</code> ! * ! * @exception IllegalStateException ! * if there is no active transaction with that timestamp. */ public IIndex getIndex(String name, long startTime); Index: AbstractTx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/AbstractTx.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** AbstractTx.java 8 Mar 2007 18:14:07 -0000 1.2 --- AbstractTx.java 11 Mar 2007 11:42:45 -0000 1.3 *************** *** 74,81 **** * The start startTime assigned to this transaction. * <p> ! * Note: Transaction {@link #startTime} and {@link #commitTime}s ! * are assigned by a global time service. The time service must provide ! * unique times for transaction start and commit timestamps and for commit ! * times for unisolated {@link Journal#commit()}s. */ final protected long startTime; --- 74,81 ---- * The start startTime assigned to this transaction. * <p> ! * Note: Transaction {@link #startTime} and {@link #commitTime}s are ! * assigned by a global time service. The time service must provide unique ! * times for transaction start and commit timestamps and commit times for ! * unisolated {@link Journal#commit()}s. */ final protected long startTime; Index: Name2Addr.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Name2Addr.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** Name2Addr.java 21 Feb 2007 20:17:21 -0000 1.4 --- Name2Addr.java 11 Mar 2007 11:42:45 -0000 1.5 *************** *** 44,49 **** * expensive operation of encoding unicode names to byte[] keys. * ! * @todo use a weak value cache so that unused indices may be swept by the ! * GC. */ private Map<String,IIndex> name2BTree = new HashMap<String,IIndex>(); --- 44,56 ---- * expensive operation of encoding unicode names to byte[] keys. * ! * FIXME This is the place to solve the resource (RAM) burden for indices is ! * Name2Addr. Currently, indices are never closed once opened which is a ! * resource leak. 
We need to close them out eventually based on LRU plus ! * timeout plus NOT IN USE. The way to approach this is a weak reference ! * cache combined with an LRU or hard reference queue that tracks reference ! * counters (just like the BTree hard reference cache for leaves). Eviction ! * events lead to closing an index iff the reference counter is zero. ! * Touches keep recently used indices from closing even though they may have ! * a zero reference count. */ private Map<String,IIndex> name2BTree = new HashMap<String,IIndex>(); *************** *** 148,152 **** // re-load btree from the store. ! btree = BTreeMetadata.load(this.store, entry.addr); // save name -> btree mapping in transient cache. --- 155,159 ---- // re-load btree from the store. ! btree = BTree.load(this.store, entry.addr); // save name -> btree mapping in transient cache. Index: IIndexManager.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/IIndexManager.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** IIndexManager.java 8 Mar 2007 18:14:07 -0000 1.2 --- IIndexManager.java 11 Mar 2007 11:42:46 -0000 1.3 *************** *** 57,61 **** * @version $Id$ */ ! public interface IIndexManager extends IStore { /** --- 57,61 ---- * @version $Id$ */ ! public interface IIndexManager extends IIndexStore { /** Index: ITransactionManager.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/ITransactionManager.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** ITransactionManager.java 28 Feb 2007 13:59:10 -0000 1.3 --- ITransactionManager.java 11 Mar 2007 11:42:45 -0000 1.4 *************** *** 131,154 **** // */ // public long newReadCommittedTx(); ! ! /** ! * Return the named index as isolated by the transaction. ! * ! * @param name ! * The index name. ! * @param startTime ! * The transaction start time, which serves as the unique ! * identifier for the transaction. ! * ! * @return The isolated index. ! * ! * @exception IllegalArgumentException ! * if <i>name</i> is <code>null</code> ! * ! * @exception IllegalStateException ! * if there is no active transaction with that timestamp. ! */ ! public IIndex getIndex(String name, long startTime); ! /** * Abort the transaction. --- 131,135 ---- // */ // public long newReadCommittedTx(); ! /** * Abort the transaction. Index: ICommitter.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/ICommitter.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** ICommitter.java 5 Feb 2007 18:17:40 -0000 1.1 --- ICommitter.java 11 Mar 2007 11:42:46 -0000 1.2 *************** *** 1,32 **** package com.bigdata.journal; import com.bigdata.rawstore.Addr; - import com.bigdata.rawstore.IRawStore; - /** ! * An interface implemented by a persistence capable data structure such as ! * a btree so that it can participate in the commit protocol for the store. * <p> ! * This interface is invoked by {@link #commit()} for each registered ! * committer. The {@link Addr} returned by {@link #commit()} will be saved ! * on the root block in the slot identified to the committer when it ! * registered itself. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ * ! * @see IRawStore#registerCommitter(int, ! 
* com.bigdata.objndx.IRawStore.ICommitter) */ public interface ICommitter { /** ! * Flush all dirty records to disk in preparation for an atomic commit. * ! * @return The {@link Addr address} of the record from which the ! * persistence capable data structure can load itself. If no ! * changes have been made then the previous address should be ! * returned as it is still valid. */ public long handleCommit(); --- 1,74 ---- + /** + + The Notice below must appear in each file of the Source Code of any + copy you distribute of the Licensed Product. Contributors to any + Modifications may add their own copyright notices to identify their + own contributions. + + License: + + The contents of this file are subject to the CognitiveWeb Open Source + License Version 1.1 (the License). You may not copy or use this file, + in either source code or executable form, except in compliance with + the License. You may obtain a copy of the License from + + http://www.CognitiveWeb.org/legal/license/ + + Software distributed under the License is distributed on an AS IS + basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See + the License for the specific language governing rights and limitations + under the License. + + Copyrights: + + Portions created by or assigned to CognitiveWeb are Copyright + (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact + information for CognitiveWeb is available at + + http://www.CognitiveWeb.org + + Portions Copyright (c) 2002-2003 Bryan Thompson. + + Acknowledgements: + + Special thanks to the developers of the Jabber Open Source License 1.0 + (JOSL), from which this License was derived. This License contains + terms that differ from JOSL. + + Special thanks to the CognitiveWeb Open Source Contributors for their + suggestions and support of the Cognitive Web. + + Modifications: + + */ package com.bigdata.journal; import com.bigdata.rawstore.Addr; /** ! * An interface implemented by a persistence capable data structure such as a ! * btree so that it can participate in the commit protocol for the store. * <p> ! * This interface is invoked by {@link Journal#commit()} for each registered ! * {@link ICommitter}. The {@link Addr} returned by {@link #handleCommit()} ! * will be saved in the {@link ICommitRecord} under the index identified by the ! * {@link ICommitter} when it was registered. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ * ! * @see IAtomicStore#setCommitter(int, ICommitter) */ public interface ICommitter { /** ! * Flush dirty state to the store in preparation for an atomic commit and ! * return the {@link Addr address} from which the persistence capable data ! * structure may be reloaded. * ! * @return The {@link Addr address} of the record from which the persistence ! * capable data structure may be reloaded. If no changes have been ! * made then the previous address should be returned as it is still ! * valid. */ public long handleCommit(); Index: Tx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Tx.java,v retrieving revision 1.34 retrieving revision 1.35 diff -C2 -d -r1.34 -r1.35 *** Tx.java 28 Feb 2007 13:59:09 -0000 1.34 --- Tx.java 11 Mar 2007 11:42:44 -0000 1.35 *************** *** 61,65 **** import com.bigdata.rawstore.Bytes; import com.bigdata.scaleup.MetadataIndex; ! import com.bigdata.scaleup.PartitionedIndex; /** --- 61,65 ---- import com.bigdata.rawstore.Bytes; import com.bigdata.scaleup.MetadataIndex; ! 
import com.bigdata.scaleup.PartitionedIndexView; /** *************** *** 101,122 **** * * @todo Support transactions where the indices isolated by the transactions are ! * {@link PartitionedIndex}es. ! * ! * @todo Track which {@link IndexSegment}s and {@link Journal}s are required ! * to support the {@link IsolatedBTree}s in use by a {@link Tx}. Deletes ! * of old journals and index segments MUST be deferred until no ! * transaction remains which can read those data. This metadata must be ! * restart-safe so that resources are eventually deleted. On restart, ! * active transactions will have been discarded abort and their resources ! * released. (Do we need a restart-safe means to indicate the set of ! * running transactions?)<br> ! * There is also a requirement for quickly locating the specific journal ! * and index segments required to support isolation of an index. This ! * probably means an index into the history of the {@link MetadataIndex} ! * (so we don't throw it away until no transactions can reach back that ! * far) as well as an index into the named indices index -- perhaps simply ! * an index by startTime into the root addresses (or whole root block ! * views, or moving the root addresses out of the root block and into the ! * store with only the address of the root addresses in the root block). * * @todo The various public methods on this API that have {@link RunState} --- 101,105 ---- * * @todo Support transactions where the indices isolated by the transactions are ! * {@link PartitionedIndexView}es. * * @todo The various public methods on this API that have {@link RunState} *************** *** 129,133 **** * server failed to test the pre-conditions and they were not met */ ! public class Tx extends AbstractTx implements IStore, ITx { /** --- 112,116 ---- * server failed to test the pre-conditions and they were not met */ ! public class Tx extends AbstractTx implements IIndexStore, ITx { /** Index: Journal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Journal.java,v retrieving revision 1.59 retrieving revision 1.60 diff -C2 -d -r1.59 -r1.60 *** Journal.java 8 Mar 2007 18:14:07 -0000 1.59 --- Journal.java 11 Mar 2007 11:42:46 -0000 1.60 *************** *** 64,73 **** import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.objndx.BTree; - import com.bigdata.objndx.BTreeMetadata; import com.bigdata.objndx.IIndex; import com.bigdata.objndx.IndexSegment; import com.bigdata.rawstore.Addr; import com.bigdata.rawstore.Bytes; ! import com.bigdata.scaleup.PartitionedJournal.Options; import com.bigdata.util.concurrent.DaemonThreadFactory; --- 64,72 ---- import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.objndx.BTree; import com.bigdata.objndx.IIndex; import com.bigdata.objndx.IndexSegment; import com.bigdata.rawstore.Addr; import com.bigdata.rawstore.Bytes; ! import com.bigdata.scaleup.MasterJournal.Options; import com.bigdata.util.concurrent.DaemonThreadFactory; *************** *** 108,135 **** * FIXME Priority items are: * <ol> - * <li> Transaction isolation (correctness tests, isolatedbtree tests, fused - * view tests).</li> * <li> Concurrent load for RDFS w/o rollback.</li> * <li> Group commit for higher transaction throughput.<br> ! * Note: TPS is basically constant for a given combination of the buffer mode ! * and whether or not commits are forced to disk. This means that the #of ! * clients is not a strong influence on performance. The big wins are Transient ! 
* and Force := No since neither conditions synchs to disk. This suggests that ! * the big win for TPS throughput is going to be group commit. </li> ! * <li> Scale out database (automatic re-partitioning of indices and processing * of deletion markers).</li> * <li> AIO for the Direct and Disk modes.</li> ! * <li> Distributed database protocols.</li> ! * <li> Segment server (mixture of journal server and read-optimized database * server).</li> ! * <li> Testing of an "embedded database" using both a journal only and a ! * journal + read-optimized database design. This can be tested up to the GPO ! * layer.</li> ! * <li>Support primary key (clustered) indices in GPO/PO layer.</li> ! * <li>Implement backward validation and state-based conflict resolution with ! * custom merge rules for RDFS, persistent objects, generic objects, and primary ! * key indices, and secondary indexes.</li> ! * <li> Architecture using queues from GOM to journal/database segment server ! * supporting both embedded and remote scenarios.</li> * </ol> * --- 107,135 ---- * FIXME Priority items are: * <ol> * <li> Concurrent load for RDFS w/o rollback.</li> * <li> Group commit for higher transaction throughput.<br> ! * Note: For short transactions, TPS is basically constant for a given ! * combination of the buffer mode and whether or not commits are forced to disk. ! * This means that the #of clients is not a strong influence on performance. The ! * big wins are Transient and Force := No since neither conditions synchs to ! * disk. This suggests that the big win for TPS throughput is going to be group ! * commit. </li> ! * <li> Scale-up database (automatic re-partitioning of indices and processing * of deletion markers).</li> * <li> AIO for the Direct and Disk modes.</li> ! * <li> GOM integration, including: support for primary key (clustered) indices; ! * using queues from GOM to journal/database segment server supporting both ! * embedded and remote scenarios; and using state-based conflict resolution to ! * obtain high concurrency for generic objects, link set metadata, and indices.</li> ! * <li> Scale-out database, including: ! * <ul> ! * <li> Data server (mixture of journal server and read-optimized database * server).</li> ! * <li> Transaction service (low-latency with failover).</li> ! * <li> Metadata index services (one per named index with failover).</li> ! * <li> Resource reclaimation. </li> ! * <li> Job scheduler to map functional programs across the data (possible ! * Hadoop integration point).</li> ! * </ul> * </ol> * *************** *** 549,557 **** val = File.createTempFile("bigdata-" + bufferMode + "-", ".jnl", tmpDir).toString(); - - // // the file that gets opened. - // properties.setProperty(Options.FILE, val); - // // turn off this property to facilitate re-open of the same file. - // properties.setProperty(Options.CREATE_TEMP_FILE,"false"); } catch(IOException ex) { --- 549,552 ---- *************** *** 935,942 **** * that method is the address from which the {@link ICommitter} may be * reloaded (and its previous address if its state has not changed). That ! * address is saved in the slot of the root block under which that committer ! * was {@link #registerCommitter(int, ICommitter) registered}. We then ! * force the data to stable store, update the root block, and force the root ! * block and the file metadata to stable store. 
*/ public long commit() { --- 930,937 ---- * that method is the address from which the {@link ICommitter} may be * reloaded (and its previous address if its state has not changed). That ! * address is saved in the {@link ICommitRecord} under the index for which ! * that committer was {@link #registerCommitter(int, ICommitter) registered}. ! * We then force the data to stable store, update the root block, and force ! * the root block and the file metadata to stable store. */ public long commit() { *************** *** 947,953 **** /** ! * Handle the {@link #commit()} and integrations with transaction support so ! * that we can update the first and last transaction identifiers on the root ! * block as necessary. * * @param tx --- 942,948 ---- /** ! * Handles the {@link #commit()} and integrations with transaction support ! * so that we can update the first and last transaction identifiers on the ! * root block as necessary. * * @param tx *************** *** 1233,1237 **** */ ! name2Addr = (Name2Addr) BTreeMetadata.load(this, addr); } --- 1228,1232 ---- */ ! name2Addr = (Name2Addr) BTree.load(this, addr); } *************** *** 1276,1280 **** */ ! ndx = (CommitRecordIndex) BTreeMetadata.load(this, addr); } --- 1271,1275 ---- */ ! ndx = (CommitRecordIndex) BTree.load(this, addr); } *************** *** 1367,1371 **** */ ! return ((Name2Addr) BTreeMetadata.load(this, commitRecord .getRootAddr(ROOT_NAME2ADDR))).get(name); --- 1362,1366 ---- */ ! return ((Name2Addr) BTree.load(this, commitRecord .getRootAddr(ROOT_NAME2ADDR))).get(name); Index: ITx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/ITx.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** ITx.java 28 Feb 2007 13:59:10 -0000 1.5 --- ITx.java 11 Mar 2007 11:42:44 -0000 1.6 *************** *** 57,61 **** * @version $Id$ */ ! public interface ITx extends IStore { /** --- 57,61 ---- * @version $Id$ */ ! public interface ITx extends IIndexStore { /** Index: IRootBlockView.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/IRootBlockView.java,v retrieving revision 1.9 retrieving revision 1.10 diff -C2 -d -r1.9 -r1.10 *** IRootBlockView.java 28 Feb 2007 13:59:10 -0000 1.9 --- IRootBlockView.java 11 Mar 2007 11:42:45 -0000 1.10 *************** *** 51,55 **** import com.bigdata.rawstore.Addr; ! import com.bigdata.scaleup.PartitionedJournal; /** --- 51,55 ---- import com.bigdata.rawstore.Addr; ! import com.bigdata.scaleup.MasterJournal; /** *************** *** 171,175 **** * a featured used to support transactional isolation. * <p> ! * Note: When using a {@link PartitionedJournal} the {@link Addr address} of * the {@link ICommitRecord} MAY refer to a historical {@link SlaveJournal} * and care MUST be exercised to resolve the address against the appropriate --- 171,175 ---- * a featured used to support transactional isolation. * <p> ! 
* Note: When using a {@link MasterJournal} the {@link Addr address} of * the {@link ICommitRecord} MAY refer to a historical {@link SlaveJournal} * and care MUST be exercised to resolve the address against the appropriate --- IStore.java DELETED --- Index: IBufferStrategy.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/IBufferStrategy.java,v retrieving revision 1.15 retrieving revision 1.16 diff -C2 -d -r1.15 -r1.16 *** IBufferStrategy.java 21 Feb 2007 20:17:21 -0000 1.15 --- IBufferStrategy.java 11 Mar 2007 11:42:45 -0000 1.16 *************** *** 87,94 **** * and whether or not the file metadata for the journal is forced * to stable storage. See {@link Options#FORCE_ON_COMMIT}. - * - * @todo this is basically an atomic commit. Reconcile it with - * {@link IAtomicStore#commit()}. Note that the latter works with - * registered committers, while this does not. */ public void writeRootBlock(IRootBlockView rootBlock, --- 87,90 ---- --- NEW FILE: IIndexStore.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Oct 25, 2006 */ package com.bigdata.journal; import com.bigdata.objndx.IIndex; /** * Interface for reading and writing persistent data using one or more named * indices. Persistent data are stored as ordered key-value tuples in indices. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public interface IIndexStore { /** * Return a named index. * * @param name * The index name. * * @return The named index or <code>null</code> if no index was registered * under that name. * * @see IJournal#registerIndex(String, IIndex) */ public IIndex getIndex(String name); } |
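The commit protocol touched by the ICommitter and Journal changes above works in two phases: on commit, each registered committer flushes its dirty state and returns the address from which it can be reloaded, those addresses are gathered into the commit record, and only then are the data and the root block forced to stable storage. The sketch below illustrates that sequence only; SimpleAtomicStore, SimpleCommitter, and the helper methods are hypothetical stand-ins for illustration, not the actual com.bigdata.journal classes.

    import java.util.HashMap;
    import java.util.Map;

    /** Stand-in for ICommitter: flush dirty state, return the address to reload from. */
    interface SimpleCommitter {
        /** Returns the new address, or the previous address if nothing changed. */
        long handleCommit();
    }

    /** Stand-in for the atomic store side of the protocol (not the real Journal). */
    class SimpleAtomicStore {

        /** Committers keyed by the commit-record index under which they registered. */
        private final Map<Integer, SimpleCommitter> committers = new HashMap<Integer, SimpleCommitter>();

        /** Root addresses captured by the most recent commit (index -> address). */
        private final Map<Integer, Long> commitRecord = new HashMap<Integer, Long>();

        /** Analogous to IAtomicStore#setCommitter(int, ICommitter). */
        public void setCommitter(int index, SimpleCommitter committer) {
            committers.put(index, committer);
        }

        /** Atomic commit: flush every committer, capture its address, then write the root block. */
        public void commit() {
            for (Map.Entry<Integer, SimpleCommitter> e : committers.entrySet()) {
                commitRecord.put(e.getKey(), e.getValue().handleCommit());
            }
            forceDataToStableStorage();   // force application data first
            writeRootBlock(commitRecord); // then the root block (and file metadata)
        }

        private void forceDataToStableStorage() {
            // e.g. FileChannel.force(false) in a disk-backed buffer strategy
        }

        private void writeRootBlock(Map<Integer, Long> record) {
            // persist the commit record addresses so committers can be reloaded on restart
        }
    }

A committer whose state is unchanged simply returns its previous address, which keeps the commit cheap for indices that were not touched since the last commit.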
From: Bryan T. <tho...@us...> - 2007-03-11 11:43:24
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/scaleup In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv5433/src/test/com/bigdata/scaleup Modified Files: TestPartitionedIndex.java TestMetadataIndex.java TestPartitionedJournal.java Log Message: Continued minor refactoring in line with model updates. Index: TestMetadataIndex.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/scaleup/TestMetadataIndex.java,v retrieving revision 1.9 retrieving revision 1.10 diff -C2 -d -r1.9 -r1.10 *** TestMetadataIndex.java 8 Mar 2007 18:14:06 -0000 1.9 --- TestMetadataIndex.java 11 Mar 2007 11:42:48 -0000 1.10 *************** *** 61,67 **** import com.bigdata.objndx.AbstractBTreeTestCase; import com.bigdata.objndx.BTree; - import com.bigdata.objndx.BTreeMetadata; import com.bigdata.objndx.BatchInsert; ! import com.bigdata.objndx.FusedView; import com.bigdata.objndx.IndexSegment; import com.bigdata.objndx.IndexSegmentBuilder; --- 61,66 ---- import com.bigdata.objndx.AbstractBTreeTestCase; import com.bigdata.objndx.BTree; import com.bigdata.objndx.BatchInsert; ! import com.bigdata.objndx.ReadOnlyFusedView; import com.bigdata.objndx.IndexSegment; import com.bigdata.objndx.IndexSegmentBuilder; *************** *** 268,272 **** // re-load the index. ! md = (MetadataIndex)BTreeMetadata.load(store, addr); assertEquals("name","abc",md.getName()); --- 267,271 ---- // re-load the index. ! md = (MetadataIndex)BTree.load(store, addr); assertEquals("name","abc",md.getName()); *************** *** 468,472 **** * verify the fused view is the same as the data already on the btree. */ ! FusedView view = new FusedView(new AbstractBTree[]{btree,seg}); assertSameIterator(new Object[] { v1, v2, v3, v4, v5, v6, v7, v8 }, --- 467,471 ---- * verify the fused view is the same as the data already on the btree. */ ! ReadOnlyFusedView view = new ReadOnlyFusedView(new AbstractBTree[]{btree,seg}); assertSameIterator(new Object[] { v1, v2, v3, v4, v5, v6, v7, v8 }, *************** *** 573,577 **** * verify the fused view is the same as the data already on the btree. */ ! assertSameIterator(new Object[] { v1, v3, v5, v7 }, new FusedView( new AbstractBTree[] { btree, seg01 }).rangeIterator(null, null)); --- 572,576 ---- * verify the fused view is the same as the data already on the btree. */ ! assertSameIterator(new Object[] { v1, v3, v5, v7 }, new ReadOnlyFusedView( new AbstractBTree[] { btree, seg01 }).rangeIterator(null, null)); *************** *** 1041,1045 **** * verify the fused view is the same as the data already on the btree. */ ! assertSameIterator(new Object[] { v1, v3, v5, v7 }, new FusedView( new AbstractBTree[] { btree, seg01 }).rangeIterator(null, null)); --- 1040,1044 ---- * verify the fused view is the same as the data already on the btree. */ ! assertSameIterator(new Object[] { v1, v3, v5, v7 }, new ReadOnlyFusedView( new AbstractBTree[] { btree, seg01 }).rangeIterator(null, null)); *************** *** 1085,1089 **** * verify the fused view is the same as the data already on the btree. */ ! assertSameIterator(new Object[] { v2, v4, v6, v8 }, new FusedView( new AbstractBTree[] { btree, seg02 }).rangeIterator(null, null)); --- 1084,1088 ---- * verify the fused view is the same as the data already on the btree. */ ! 
assertSameIterator(new Object[] { v2, v4, v6, v8 }, new ReadOnlyFusedView( new AbstractBTree[] { btree, seg02 }).rangeIterator(null, null)); *************** *** 1093,1097 **** */ assertSameIterator(new Object[] { v1, v2, v3, v4, v5, v6, v7, v8 }, ! new FusedView(new AbstractBTree[] { seg01, seg02 }) .rangeIterator(null, null)); --- 1092,1096 ---- */ assertSameIterator(new Object[] { v1, v2, v3, v4, v5, v6, v7, v8 }, ! new ReadOnlyFusedView(new AbstractBTree[] { seg01, seg02 }) .rangeIterator(null, null)); Index: TestPartitionedIndex.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/scaleup/TestPartitionedIndex.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** TestPartitionedIndex.java 8 Mar 2007 18:14:06 -0000 1.2 --- TestPartitionedIndex.java 11 Mar 2007 11:42:47 -0000 1.3 *************** *** 54,58 **** /** ! * Test suite for {@link PartitionedIndex}. * * @todo This should be refactored to test when using {@link BTree} as well as --- 54,58 ---- /** ! * Test suite for {@link PartitionedIndexView}. * * @todo This should be refactored to test when using {@link BTree} as well as *************** *** 61,65 **** * from {@link BTree} to {@link UnisolatedBTree} highlights many assumptions * in {@link IndexSegmentBuilder}, {@link IndexSegmentMerger}, and the ! * code that handles {@link PartitionedJournal#overflow()}. * * @todo where possible, it would be nice to leverage the unit tests for the --- 61,65 ---- * from {@link BTree} to {@link UnisolatedBTree} highlights many assumptions * in {@link IndexSegmentBuilder}, {@link IndexSegmentMerger}, and the ! * code that handles {@link MasterJournal#overflow()}. * * @todo where possible, it would be nice to leverage the unit tests for the Index: TestPartitionedJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/scaleup/TestPartitionedJournal.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** TestPartitionedJournal.java 8 Mar 2007 18:14:06 -0000 1.5 --- TestPartitionedJournal.java 11 Mar 2007 11:42:48 -0000 1.6 *************** *** 61,66 **** import com.bigdata.objndx.KeyBuilder; import com.bigdata.rawstore.SimpleMemoryRawStore; ! import com.bigdata.scaleup.PartitionedJournal.MergePolicy; ! import com.bigdata.scaleup.PartitionedJournal.Options; /** --- 61,66 ---- import com.bigdata.objndx.KeyBuilder; import com.bigdata.rawstore.SimpleMemoryRawStore; ! import com.bigdata.scaleup.MasterJournal.MergePolicy; ! import com.bigdata.scaleup.MasterJournal.Options; /** *************** *** 139,143 **** properties.setProperty(Options.BASENAME,getName()); ! PartitionedJournal journal = new PartitionedJournal(properties); final String name = "abc"; --- 139,143 ---- properties.setProperty(Options.BASENAME,getName()); ! MasterJournal journal = new MasterJournal(properties); final String name = "abc"; *************** *** 149,155 **** index = journal.registerIndex(name, index); ! assertTrue(journal.getIndex(name) instanceof PartitionedIndex); ! assertEquals("name", name, ((PartitionedIndex) journal.getIndex(name)) .getName()); --- 149,155 ---- index = journal.registerIndex(name, index); ! assertTrue(journal.getIndex(name) instanceof PartitionedIndexView); ! assertEquals("name", name, ((PartitionedIndexView) journal.getIndex(name)) .getName()); *************** *** 175,184 **** * re-open the journal and test restart safety. */ ! 
journal = new PartitionedJournal(properties); ! index = (PartitionedIndex) journal.getIndex(name); assertNotNull("btree", index); ! assertEquals("entryCount", 1, ((PartitionedIndex)index).getBTree().getEntryCount()); assertEquals(v0, (byte[])index.lookup(k0)); --- 175,184 ---- * re-open the journal and test restart safety. */ ! journal = new MasterJournal(properties); ! index = (PartitionedIndexView) journal.getIndex(name); assertNotNull("btree", index); ! assertEquals("entryCount", 1, ((PartitionedIndexView)index).getBTree().getEntryCount()); assertEquals(v0, (byte[])index.lookup(k0)); *************** *** 205,209 **** properties.setProperty(Options.BASENAME,getName()); ! PartitionedJournal journal = new PartitionedJournal(properties); final String name = "abc"; --- 205,209 ---- properties.setProperty(Options.BASENAME,getName()); ! MasterJournal journal = new MasterJournal(properties); final String name = "abc"; *************** *** 239,243 **** properties.setProperty(Options.BASENAME,getName()); ! PartitionedJournal journal = new PartitionedJournal(properties); final String name = "abc"; --- 239,243 ---- properties.setProperty(Options.BASENAME,getName()); ! MasterJournal journal = new MasterJournal(properties); final String name = "abc"; *************** *** 289,293 **** MergePolicy.CompactingMerge.toString()); ! PartitionedJournal journal = new PartitionedJournal(properties); final String name = "abc"; --- 289,293 ---- MergePolicy.CompactingMerge.toString()); ! MasterJournal journal = new MasterJournal(properties); final String name = "abc"; *************** *** 323,327 **** * Verify that the btree is empty. */ ! assertEquals(0, ((PartitionedIndex) journal.getIndex(name)).getBTree() .getEntryCount()); --- 323,327 ---- * Verify that the btree is empty. */ ! assertEquals(0, ((PartitionedIndexView) journal.getIndex(name)).getBTree() .getEntryCount()); *************** *** 354,358 **** MergePolicy.CompactingMerge.toString()); ! PartitionedJournal journal = new PartitionedJournal(properties); final String name = "abc"; --- 354,358 ---- MergePolicy.CompactingMerge.toString()); ! MasterJournal journal = new MasterJournal(properties); final String name = "abc"; *************** *** 410,414 **** * Verify that the btree is empty. */ ! assertEquals(0, ((PartitionedIndex) journal.getIndex(name)) .getBTree().getEntryCount()); --- 410,414 ---- * Verify that the btree is empty. */ ! assertEquals(0, ((PartitionedIndexView) journal.getIndex(name)) .getBTree().getEntryCount()); |
From: Bryan T. <tho...@us...> - 2007-03-11 11:43:19
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv5433/src/java/com/bigdata/isolation Modified Files: IsolatableFusedView.java IsolatedBTree.java IIsolatableIndex.java Removed Files: IsolatablePartitionedIndex.java Log Message: Continued minor refactoring in line with model updates. Index: IsolatableFusedView.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation/IsolatableFusedView.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** IsolatableFusedView.java 8 Mar 2007 18:14:06 -0000 1.3 --- IsolatableFusedView.java 11 Mar 2007 11:42:44 -0000 1.4 *************** *** 49,56 **** import com.bigdata.objndx.AbstractBTree; ! import com.bigdata.objndx.FusedView; /** ! * A {@link FusedView} that understands how to process delete markers. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> --- 49,56 ---- import com.bigdata.objndx.AbstractBTree; ! import com.bigdata.objndx.ReadOnlyFusedView; /** ! * An {@link IFusedView} that understands how to process delete markers. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> *************** *** 59,63 **** * @todo refactor to isolate and override the merge rule. */ ! public class IsolatableFusedView extends FusedView implements IIsolatableIndex { /** --- 59,63 ---- * @todo refactor to isolate and override the merge rule. */ ! public class IsolatableFusedView extends ReadOnlyFusedView implements IIsolatableIndex { /** Index: IIsolatableIndex.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation/IIsolatableIndex.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** IIsolatableIndex.java 8 Mar 2007 18:14:06 -0000 1.2 --- IIsolatableIndex.java 11 Mar 2007 11:42:44 -0000 1.3 *************** *** 58,61 **** --- 58,62 ---- import com.bigdata.objndx.IndexSegmentMerger; import com.bigdata.objndx.Leaf; + import com.bigdata.scaleup.IsolatablePartitionedIndexView; /** *************** *** 138,142 **** * btree on the journal. the UnisolatedBTree is different in that it reads * against a view and writes on the UnisolatedBTree. In fact, this is ! * precisely how the PartitionedIndex works already. All that we have to * do is to construct the view from the historical committed state from * which the transaction emerges. In order to do this, we need to be able --- 139,143 ---- * btree on the journal. the UnisolatedBTree is different in that it reads * against a view and writes on the UnisolatedBTree. In fact, this is ! * precisely how the PartitionedIndexView works already. All that we have to * do is to construct the view from the historical committed state from * which the transaction emerges. In order to do this, we need to be able *************** *** 193,198 **** * well). The place to do this is probably where the btrees are created * using an IsolatedBtree class (extends BTree or perhaps just implements ! * {@link IIndex} so that we can use it in combination with the FusedView ! * to support partitioned indices). FusedView will also need to be * modified to support delete markers and could thrown an exception if the * version counters were out of the expected order (so could the leaf --- 194,199 ---- * well). The place to do this is probably where the btrees are created * using an IsolatedBtree class (extends BTree or perhaps just implements ! 
* {@link IIndex} so that we can use it in combination with the ReadOnlyFusedView ! * to support partitioned indices). ReadOnlyFusedView will also need to be * modified to support delete markers and could thrown an exception if the * version counters were out of the expected order (so could the leaf *************** *** 208,212 **** * @see UnisolatedBTree * @see IsolatableFusedView ! * @see IsolatablePartitionedIndex * @see IndexSegmentMerger * @see Journal --- 209,213 ---- * @see UnisolatedBTree * @see IsolatableFusedView ! * @see IsolatablePartitionedIndexView * @see IndexSegmentMerger * @see Journal --- IsolatablePartitionedIndex.java DELETED --- Index: IsolatedBTree.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation/IsolatedBTree.java,v retrieving revision 1.7 retrieving revision 1.8 diff -C2 -d -r1.7 -r1.8 *** IsolatedBTree.java 8 Mar 2007 18:14:06 -0000 1.7 --- IsolatedBTree.java 11 Mar 2007 11:42:44 -0000 1.8 *************** *** 53,57 **** import com.bigdata.objndx.BTreeMetadata; import com.bigdata.objndx.ByteArrayValueSerializer; - import com.bigdata.objndx.FusedView; import com.bigdata.objndx.IBatchBTree; import com.bigdata.objndx.IEntryIterator; --- 53,56 ---- *************** *** 167,171 **** * a persistence capable parameter in order to reconstruct the view. * Consider whether or not this can be refactored per ! * {@link BTreeMetadata#load(IRawStore, long)}. */ public IsolatedBTree(IRawStore store, BTreeMetadata metadata, UnisolatedBTree src) { --- 166,170 ---- * a persistence capable parameter in order to reconstruct the view. * Consider whether or not this can be refactored per ! * {@link BTree#load(IRawStore, long)}. */ public IsolatedBTree(IRawStore store, BTreeMetadata metadata, UnisolatedBTree src) { *************** *** 335,343 **** public IEntryIterator rangeIterator(byte[] fromKey, byte[] toKey) { ! // FIXME we need a version of FusedView that is aware of delete markers ! // and that applies them such that a key deleted in the write set will ! // not have a value reported from the isolated index. ! return new FusedView(this,src).rangeIterator(fromKey, toKey); } --- 334,344 ---- public IEntryIterator rangeIterator(byte[] fromKey, byte[] toKey) { ! /* ! * This uses a version of ReadOnlyFusedView that is aware of delete markers and ! * that applies them such that a key deleted in the write set will not ! * have a value reported from the isolated index. ! */ ! return new IsolatableFusedView(this,src).rangeIterator(fromKey, toKey); } *************** *** 349,353 **** public IEntryIterator entryIterator() { ! return new FusedView(this,src).rangeIterator(null,null); } --- 350,354 ---- public IEntryIterator entryIterator() { ! return rangeIterator(null,null); } |
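The change to IsolatedBTree.rangeIterator() above routes range scans through IsolatableFusedView precisely so that delete markers in the transaction's write set hide the corresponding entries in the source index. The following sketch, written against plain TreeMaps with an illustrative WriteEntry type rather than the real com.bigdata.isolation classes, shows the intended behavior: a delete writes a marker into the write set, and reads treat that marker as "no value" even though the committed index still holds one.

    import java.util.Map;
    import java.util.TreeMap;

    /** Toy sketch of an isolated index with delete markers; not the real IsolatedBTree. */
    class IsolationSketch {

        /** A write-set entry is either a new value or a delete marker. */
        static final class WriteEntry {
            final byte[] value;     // null when this entry is a delete marker
            final boolean deleted;
            WriteEntry(byte[] value, boolean deleted) {
                this.value = value;
                this.deleted = deleted;
            }
        }

        /** The transaction's write set (stand-in for the IsolatedBTree). */
        private final TreeMap<String, WriteEntry> writeSet = new TreeMap<String, WriteEntry>();

        /** The committed state the transaction reads against (stand-in for the source view). */
        private final TreeMap<String, byte[]> committed = new TreeMap<String, byte[]>();

        void insert(String key, byte[] value) {
            writeSet.put(key, new WriteEntry(value, false));
        }

        /** A delete writes a marker; the committed index is never touched by the transaction. */
        void remove(String key) {
            writeSet.put(key, new WriteEntry(null, true));
        }

        /** Fused read: the write set shadows the committed index and markers hide entries. */
        byte[] lookup(String key) {
            WriteEntry e = writeSet.get(key);
            if (e != null) {
                return e.deleted ? null : e.value;
            }
            return committed.get(key);
        }

        /** Range scan applying the same rule per key: deleted keys are simply absent from the result. */
        TreeMap<String, byte[]> rangeScan() {
            TreeMap<String, byte[]> out = new TreeMap<String, byte[]>(committed);
            for (Map.Entry<String, WriteEntry> e : writeSet.entrySet()) {
                if (e.getValue().deleted) {
                    out.remove(e.getKey());
                } else {
                    out.put(e.getKey(), e.getValue().value);
                }
            }
            return out;
        }
    }

The same markers are what a later compacting merge needs in order to actually remove the deleted entries from the persistent index rather than resurrecting the old values.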