From: Bryan T. <tho...@us...> - 2007-03-06 20:38:10

Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/journal
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv23960/src/java/com/bigdata/journal

Modified Files:
	AbstractBufferStrategy.java TemporaryRawStore.java Journal.java

Log Message:
Refactoring to introduce asynchronous handling of overflow events in support of a scale-up/scale-out design.

Index: TemporaryRawStore.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/TemporaryRawStore.java,v
retrieving revision 1.2
retrieving revision 1.3
diff -C2 -d -r1.2 -r1.3
*** TemporaryRawStore.java	22 Feb 2007 16:59:34 -0000	1.2
--- TemporaryRawStore.java	6 Mar 2007 20:38:06 -0000	1.3
***************
*** 195,198 ****
--- 195,204 ----
      }
  
+     public long size() {
+ 
+         return buf.size();
+ 
+     }
+ 
      public boolean isOpen() {

Index: AbstractBufferStrategy.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/AbstractBufferStrategy.java,v
retrieving revision 1.11
retrieving revision 1.12
diff -C2 -d -r1.11 -r1.12
*** AbstractBufferStrategy.java	21 Feb 2007 20:17:21 -0000	1.11
--- AbstractBufferStrategy.java	6 Mar 2007 20:38:06 -0000	1.12
***************
*** 104,107 ****
--- 104,113 ----
      }
+ 
+     public final long size() {
+ 
+         return (long)nextOffset;
+ 
+     }
  
      /**

Index: Journal.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Journal.java,v
retrieving revision 1.57
retrieving revision 1.58
diff -C2 -d -r1.57 -r1.58
*** Journal.java	28 Feb 2007 13:59:10 -0000	1.57
--- Journal.java	6 Mar 2007 20:38:06 -0000	1.58
***************
*** 1107,1110 ****
--- 1107,1116 ----
      }
  
+     public long size() {
+ 
+         return _bufferStrategy.size();
+ 
+     }
+ 
      public long write(ByteBuffer data) {
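The size() methods added in this commit all delegate down to the buffer strategy, which reports the next free byte offset of the append-only store. A minimal, hypothetical sketch of that delegation pattern (class and field names here are illustrative, not the actual bigdata API):

```java
// Hypothetical, simplified sketch (not the real bigdata classes): a journal
// reports its size() by delegating to its buffer strategy, which tracks the
// next free byte offset in an append-only store.
public class SizeDelegationSketch {

    interface BufferStrategy {
        long write(byte[] data); // returns the offset at which the record starts
        long size();             // #of bytes written so far
    }

    static class AppendOnlyBuffer implements BufferStrategy {
        private long nextOffset = 0L;

        public long write(byte[] data) {
            final long addr = nextOffset;
            nextOffset += data.length; // append-only allocation
            return addr;
        }

        public long size() {
            // the next free offset doubles as the bytes-written count
            return nextOffset;
        }
    }

    static class Journal {
        private final BufferStrategy bufferStrategy = new AppendOnlyBuffer();

        public long write(byte[] data) {
            return bufferStrategy.write(data);
        }

        public long size() {
            // mirrors the commit's Journal#size() -> _bufferStrategy.size()
            return bufferStrategy.size();
        }
    }

    public static void main(String[] args) {
        Journal journal = new Journal();
        journal.write(new byte[100]);
        journal.write(new byte[24]);
        System.out.println(journal.size()); // prints 124
    }
}
```

The point of the pattern is that callers ask the Journal, and only the strategy knows how storage is laid out, so disk, direct, and transient buffers can each answer differently.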
From: Bryan T. <tho...@us...> - 2007-03-06 20:38:10

Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/journal
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv23960/src/test/com/bigdata/journal

Modified Files:
	StressTestConcurrent.java

Log Message:
Refactoring to introduce asynchronous handling of overflow events in support of a scale-up/scale-out design.

Index: StressTestConcurrent.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/StressTestConcurrent.java,v
retrieving revision 1.6
retrieving revision 1.7
diff -C2 -d -r1.6 -r1.7
*** StressTestConcurrent.java	28 Feb 2007 13:59:09 -0000	1.6
--- StressTestConcurrent.java	6 Mar 2007 20:38:06 -0000	1.7
***************
*** 48,52 ****
  package com.bigdata.journal;
  
- import java.io.File;
  import java.util.Collection;
  import java.util.HashSet;
--- 48,51 ----
***************
*** 385,389 ****
      properties.setProperty(Options.FORCE_ON_COMMIT,ForceEnum.No.toString());
  
! //    properties.setProperty(Options.BUFFER_MODE, BufferMode.Direct.toString());
  
  //    properties.setProperty(Options.BUFFER_MODE, BufferMode.Mapped.toString());
--- 384,388 ----
      properties.setProperty(Options.FORCE_ON_COMMIT,ForceEnum.No.toString());
  
!     properties.setProperty(Options.BUFFER_MODE, BufferMode.Direct.toString());
  
  //    properties.setProperty(Options.BUFFER_MODE, BufferMode.Mapped.toString());
***************
*** 395,398 ****
--- 394,405 ----
      properties.setProperty(Options.CREATE_TEMP_FILE, "true");
  
+     properties.setProperty(TestOptions.TIMEOUT,"10");
+ 
+     properties.setProperty(TestOptions.NCLIENTS,"100");
+ 
+     properties.setProperty(TestOptions.KEYLEN,"4");
+ 
+     properties.setProperty(TestOptions.NOPS,"4");
+ 
      new StressTestConcurrent().doComparisonTest(properties);
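The hunk above drives the stress test entirely from a Properties object (timeout, #clients, key length, #ops). A small, self-contained sketch of that configuration style follows; the key strings and the parsing helper are assumptions for illustration, not the real TestOptions constants:

```java
import java.util.Properties;

// Hypothetical sketch of property-driven test configuration, mirroring the
// TestOptions keys set in the diff above (the key strings are assumed).
public class StressConfigSketch {

    static final String TIMEOUT = "timeout";   // run duration in seconds
    static final String NCLIENTS = "nclients"; // #of concurrent clients
    static final String KEYLEN = "keyLen";     // key length in bytes
    static final String NOPS = "nops";         // #of operations per task

    /** Parse an int-valued property, falling back to a default when unset. */
    static int getInt(Properties properties, String key, int defaultValue) {
        final String val = properties.getProperty(key);
        return val == null ? defaultValue : Integer.parseInt(val.trim());
    }

    public static void main(String[] args) {
        Properties properties = new Properties();
        // the values chosen by the commit above
        properties.setProperty(TIMEOUT, "10");
        properties.setProperty(NCLIENTS, "100");
        properties.setProperty(KEYLEN, "4");
        properties.setProperty(NOPS, "4");

        System.out.println(getInt(properties, NCLIENTS, 20)); // prints 100
        System.out.println(getInt(properties, "missing", 7)); // prints 7
    }
}
```

Keeping every knob in Properties is what lets a driver like ComparisonTestDriver run the same test under many conditions without recompiling.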
From: Bryan T. <tho...@us...> - 2007-02-28 13:59:20

Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/journal
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv19761/src/test/com/bigdata/journal

Modified Files:
	TestTransactionServer.java TestReadCommittedTx.java TestRootBlockView.java
	TestReadOnlyTx.java ComparisonTestDriver.java StressTestConcurrent.java
	TestJournalBasics.java TestTxJournalProtocol.java TestAll.java
Added Files:
	AbstractTestTxRunState.java
Removed Files:
	TestTxRunState.java

Log Message:
Adds support for read-committed transactions.

--- TestTxRunState.java DELETED ---

Index: TestJournalBasics.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestJournalBasics.java,v
retrieving revision 1.10
retrieving revision 1.11
diff -C2 -d -r1.10 -r1.11
*** TestJournalBasics.java	20 Feb 2007 00:27:03 -0000	1.10
--- TestJournalBasics.java	28 Feb 2007 13:59:09 -0000	1.11
***************
*** 103,107 ****
      // tests of transitions in the transaction RunState state machine.
!     suite.addTestSuite( TestTxRunState.class );
      // @todo update these tests of the tx-journal integration.
      suite.addTestSuite( TestTxJournalProtocol.class );
--- 103,107 ----
      // tests of transitions in the transaction RunState state machine.
!     suite.addTest( AbstractTestTxRunState.suite() );
      // @todo update these tests of the tx-journal integration.
      suite.addTestSuite( TestTxJournalProtocol.class );

Index: ComparisonTestDriver.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/ComparisonTestDriver.java,v
retrieving revision 1.2
retrieving revision 1.3
diff -C2 -d -r1.2 -r1.3
*** ComparisonTestDriver.java	22 Feb 2007 16:59:34 -0000	1.2
--- ComparisonTestDriver.java	28 Feb 2007 13:59:09 -0000	1.3
***************
*** 291,294 ****
--- 291,296 ----
      Iterator<Condition> itr = conditions.iterator();
  
+     int i = 0;
+ 
      while (itr.hasNext()) {
***************
*** 297,301 ****
      IComparisonTest test = (IComparisonTest) cl.newInstance();
  
!     System.err.println("Running: "+ condition.name);
  
      try {
--- 299,303 ----
      IComparisonTest test = (IComparisonTest) cl.newInstance();
  
!     System.err.println("Running "+ condition.name +" ("+i+" of "+nconditions+")");
  
      try {
***************
*** 310,319 ****
          writer.write(condition.result + ", " + condition.name+"\n");
      }
-     writer.flush();
-     writer.close();
  }
--- 312,323 ----
          writer.write(condition.result + ", " + condition.name+"\n");
+         writer.flush();
+ 
      }
      writer.close();
+     i++;
+ 
  }

Index: StressTestConcurrent.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/StressTestConcurrent.java,v
retrieving revision 1.5
retrieving revision 1.6
diff -C2 -d -r1.5 -r1.6
*** StressTestConcurrent.java	22 Feb 2007 16:59:34 -0000	1.5
--- StressTestConcurrent.java	28 Feb 2007 13:59:09 -0000	1.6
***************
*** 221,226 ****
      }
  
!     journal.closeAndDelete();
  
      String msg = "#clients="
--- 221,232 ----
      }
+ 
+     journal.shutdown();
  
!     if(journal.getFile()!=null) {
! 
!         journal.getFile().delete();
! 
!     }
  
      String msg = "#clients="
***************
*** 377,381 ****
  //    properties.setProperty(Options.BUFFER_MODE, BufferMode.Transient.toString());
  
! //    properties.setProperty(Options.FORCE_ON_COMMIT,ForceEnum.No.toString());
  
  //    properties.setProperty(Options.BUFFER_MODE, BufferMode.Direct.toString());
--- 383,387 ----
  //    properties.setProperty(Options.BUFFER_MODE, BufferMode.Transient.toString());
  
!     properties.setProperty(Options.FORCE_ON_COMMIT,ForceEnum.No.toString());
  
  //    properties.setProperty(Options.BUFFER_MODE, BufferMode.Direct.toString());
***************
*** 386,398 ****
      properties.setProperty(Options.SEGMENT, "0");
  
-     File file = File.createTempFile("bigdata", ".jnl");
- 
-     file.deleteOnExit();
- 
-     if(!file.delete()) fail("Could not remove temp file before test");
- 
-     properties.setProperty(Options.FILE, file.toString());
  
      new StressTestConcurrent().doComparisonTest(properties);
--- 392,398 ----
      properties.setProperty(Options.SEGMENT, "0");
  
+     properties.setProperty(Options.CREATE_TEMP_FILE, "true");
+ 
      new StressTestConcurrent().doComparisonTest(properties);
***************
*** 460,471 ****
      keyLen, nops);
-     journal.shutdown();
- 
-     if(journal.getFile()!=null) {
- 
-         journal.getFile().delete();
- 
-     }
- 
      return msg;
--- 460,463 ----

Index: TestTxJournalProtocol.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestTxJournalProtocol.java,v
retrieving revision 1.2
retrieving revision 1.3
diff -C2 -d -r1.2 -r1.3
*** TestTxJournalProtocol.java	22 Feb 2007 16:59:34 -0000	1.2
--- TestTxJournalProtocol.java	28 Feb 2007 13:59:09 -0000	1.3
***************
*** 49,53 ****
  import java.io.IOException;
  
- import java.util.Properties;
  
  /**
--- 49,52 ----
***************
*** 56,60 ****
   *
   * @todo the tests in this suite are stale and need to be reviewed, possibly
!  *       revised or replaced, and certainly extended.
   *
   * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
--- 55,63 ----
   *
   * @todo the tests in this suite are stale and need to be reviewed, possibly
!  *       revised or replaced, and certainly extended. The main issue is that
!  *       they do not test the basic contract for the journal, transactions, and
!  *       the transaction manager service but instead test some particulars of
!  *       the implementation that might not even be the right way to manage that
!  *       contract.
   *
   * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
***************
*** 78,87 ****
      Journal journal = new Journal(getProperties());
  
!     Tx tx0 = new Tx(journal, 0, false);
  
      try {
  
          // Try to create another transaction with the same identifier.
  
!         new Tx(journal, 0, false);
  
          fail("Expecting: " + IllegalStateException.class);
--- 81,92 ----
      Journal journal = new Journal(getProperties());
  
!     final long startTime = journal.nextTimestamp();
! 
!     Tx tx0 = new Tx(journal, startTime, false);
  
      try {
  
          // Try to create another transaction with the same identifier.
  
!         new Tx(journal, startTime, false);
  
          fail("Expecting: " + IllegalStateException.class);
***************
*** 119,130 ****
      Journal journal = new Journal(getProperties());
  
!     ITx tx0 = new Tx(journal, 0, false);
! 
      tx0.prepare();
  
      try {
  
!         // Try to create another transaction with the same identifier.
  
!         new Tx(journal, 0, false);
  
          fail("Expecting: " + IllegalStateException.class);
--- 124,137 ----
      Journal journal = new Journal(getProperties());
  
!     final long startTime = journal.nextTimestamp();
  
!     ITx tx0 = new Tx(journal, startTime, false);
! 
!     tx0.prepare(journal.nextTimestamp());
  
      try {
  
!         // Try to create another transaction with the same start time.
! 
!         new Tx(journal, startTime, false);
  
          fail("Expecting: " + IllegalStateException.class);

Index: TestReadCommittedTx.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestReadCommittedTx.java,v
retrieving revision 1.1
retrieving revision 1.2
diff -C2 -d -r1.1 -r1.2
*** TestReadCommittedTx.java	17 Feb 2007 21:34:12 -0000	1.1
--- TestReadCommittedTx.java	28 Feb 2007 13:59:09 -0000	1.2
***************
*** 48,51 ****
--- 48,54 ----
  package com.bigdata.journal;
  
+ import com.bigdata.isolation.UnisolatedBTree;
+ import com.bigdata.objndx.IIndex;
+ 
  /**
   * Test suite for read-committed transactions.
***************
*** 69,75 ****
      }
  
!     public void test_something() {
! 
          fail("write tests");
  
      }
--- 72,289 ----
      }
  
!     /**
!      * Test verifies that you can not write on a read-only transaction.
!      */
!     public void test_isReadOnly() {
!         Journal journal = new Journal(getProperties());
! 
!         String name = "abc";
! 
!         final byte[] k1 = new byte[]{1};
! 
!         final byte[] v1 = new byte[]{1};
! 
!         {
! 
!             /*
!              * register an index, write on the index, and commit the journal.
!              */
!             IIndex ndx = journal.registerIndex(name, new UnisolatedBTree(
!                     journal));
! 
!             ndx.insert(k1, v1);
! 
!             journal.commit();
! 
!         }
! 
!         {
! 
!             /*
!              * create a read-only transaction, verify that we can read the
!              * value written on the index but that we can not write on the
!              * index.
!              */
! 
!             final long tx1 = journal.newTx(IsolationEnum.ReadOnly);
! 
!             IIndex ndx = journal.getIndex(name,tx1);
! 
!             assertNotNull(ndx);
! 
!             assertEquals((byte[])v1,(byte[])ndx.lookup(k1));
! 
!             try {
!                 ndx.insert(k1,new byte[]{1,2,3});
!                 fail("Expecting: "+UnsupportedOperationException.class);
!             } catch( UnsupportedOperationException ex) {
!                 System.err.println("Ignoring expected exception: "+ex);
!             }
! 
!             journal.commit(tx1);
! 
!         }
! 
!         {
! 
!             /*
!              * do it again, but this time we will abort the read-only
!              * transaction.
!              */
! 
!             final long tx1 = journal.newTx(IsolationEnum.ReadOnly);
! 
!             IIndex ndx = journal.getIndex(name,tx1);
! 
!             assertNotNull(ndx);
! 
!             assertEquals((byte[])v1,(byte[])ndx.lookup(k1));
! 
!             try {
!                 ndx.insert(k1,new byte[]{1,2,3});
!                 fail("Expecting: "+UnsupportedOperationException.class);
!             } catch( UnsupportedOperationException ex) {
!                 System.err.println("Ignoring expected exception: "+ex);
!             }
! 
!             journal.abort(tx1);
! 
!         }
! 
!         journal.closeAndDelete();
! 
!     }
! 
!     /**
!      * @todo test that the transaction begins reading from the most recently
!      *       committed state, that unisolated writes without commits are not
!      *       visible, that newly committed state shows up in the next index view
!      *       requested by the tx (so this is either a different index view
!      *       object or a delegation mechanism that indirects to the current
!      *       view), and that the same index view object is returned if there
!      *       have been no intervening commits.
!      */
!     public void test_readComittedIsolation() {
! 
!         Journal journal = new Journal(getProperties());
! 
!         String name = "abc";
! 
!         final byte[] k1 = new byte[]{1};
! 
!         final byte[] v1 = new byte[]{1};
! 
!         // create a new read-committed transaction.
!         final long ts0 = journal.newTx(IsolationEnum.ReadCommitted);
! 
!         {
!             /*
!              * verify that the index is not accessible since it has not been
!              * registered.
!              */
!             assertNull(journal.getIndex(name,ts0));
! 
!         }
! 
!         {
! 
!             // register an index and commit the journal.
! 
!             journal.registerIndex(name, new UnisolatedBTree(journal));
! 
!             journal.commit();
! 
!         }
! 
!         {
! 
!             /*
!              * verify that the index is now accessible but that it does not
!              * hold any data.
!              */
! 
!             IIndex ts0_ndx = journal.getIndex(name, ts0);
! 
!             assertFalse(ts0_ndx.contains(k1));
!             assertNull(ts0_ndx.lookup(k1));
!             assertEquals(0,ts0_ndx.rangeCount(null, null));
! 
!         }
! 
!         {
!             // obtain the unisolated index.
!             IIndex ndx = journal.getIndex(name);
! 
!             // write on the index.
!             ndx.insert(k1, v1);
! 
!         }
! 
!         {
! 
!             /*
!              * verify that the write is not visible since the journal has not
!              * been committed.
!              */
! 
!             IIndex ts0_ndx = journal.getIndex(name, ts0);
! 
!             assertFalse(ts0_ndx.contains(k1));
!             assertNull(ts0_ndx.lookup(k1));
!             assertEquals(0,ts0_ndx.rangeCount(null, null));
! 
!         }
! 
!         {
!             /*
!              * commit the journal and verify that the write is now visible to
!              * the read-committed transaction.
!              */
! 
!             journal.commit();
! 
!             IIndex ts0_ndx = journal.getIndex(name, ts0);
! 
!             assertTrue(ts0_ndx.contains(k1));
!             assertEquals(v1,(byte[])ts0_ndx.lookup(k1));
!             assertEquals(1,ts0_ndx.rangeCount(null, null));
! 
!         }
! 
!         {
!             /*
!              * verify that the write is also visible in a new read-committed
!              * transaction.
!              */
! 
!             long ts1 = journal.newTx(IsolationEnum.ReadCommitted);
! 
!             IIndex ts1_ndx = journal.getIndex(name, ts1);
! 
!             assertTrue(ts1_ndx.contains(k1));
!             assertEquals(v1,(byte[])ts1_ndx.lookup(k1));
!             assertEquals(1,ts1_ndx.rangeCount(null, null));
! 
!             // should be a nop.
!             assertEquals(0,journal.commit(ts1));
! 
!         }
! 
!         // should be a nop.
!         journal.abort(ts0);
! 
!         // close and delete the database.
!         journal.closeAndDelete();
! 
!     }
! 
!     /**
!      * @todo test protocol for closing index views and releasing holds on commit
!      *       points.
!      */
!     public void test_releaseViews() {
! 
!         fail("write test");
  
      }

Index: TestTransactionServer.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestTransactionServer.java,v
retrieving revision 1.2
retrieving revision 1.3
diff -C2 -d -r1.2 -r1.3
*** TestTransactionServer.java	19 Feb 2007 19:00:18 -0000	1.2
--- TestTransactionServer.java	28 Feb 2007 13:59:09 -0000	1.3
***************
*** 48,52 ****
  package com.bigdata.journal;
  
- import com.bigdata.journal.TransactionServer.IsolationEnum;
  
  import junit.framework.TestCase;
--- 48,51 ----

Index: TestAll.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestAll.java,v
retrieving revision 1.9
retrieving revision 1.10
diff -C2 -d -r1.9 -r1.10
*** TestAll.java	22 Feb 2007 16:59:34 -0000	1.9
--- TestAll.java	28 Feb 2007 13:59:09 -0000	1.10
***************
*** 103,106 ****
--- 103,114 ----
      suite.addTest( TestTransientJournal.suite() );
      suite.addTest( TestDirectJournal.suite() );
+     /*
+      * Note: The mapped journal is somewhat problematic and its tests are
+      * disabled for the moment since (a) we have to pre-allocate large
+      * extends; (b) it does not perform any better than other options; and
+      * (c) we can not synchronously unmap or delete a mapped file which
+      * makes cleanup of the test suites difficult and winds up spewing 200M
+      * files all over your temp directory.
+      */
      // suite.addTest( TestMappedJournal.suite() );
  
      suite.addTest( TestDiskJournal.suite() );

Index: TestRootBlockView.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestRootBlockView.java,v
retrieving revision 1.9
retrieving revision 1.10
diff -C2 -d -r1.9 -r1.10
*** TestRootBlockView.java	19 Feb 2007 19:00:18 -0000	1.9
--- TestRootBlockView.java	28 Feb 2007 13:59:09 -0000	1.10
***************
*** 110,117 ****
      assertEquals("segmentId", segmentId, rootBlock.getSegmentId());
      assertEquals("nextOffset", nextOffset, rootBlock.getNextOffset());
!     assertEquals("firstTxId", firstTxId, rootBlock.getFirstTxCommitTime());
!     assertEquals("lastTxId", lastTxId, rootBlock.getLastTxCommitTime());
      assertEquals("commitCounter", commitCounter, rootBlock.getCommitCounter());
!     assertEquals("commitTimestamp", commitTimestamp, rootBlock.getCommitTimestamp());
      assertEquals("commitRecordAddr", commitRecordAddr, rootBlock.getCommitRecordAddr());
      assertEquals("commitRecordIndexAddr", commitRecordIndexAddr, rootBlock.getCommitRecordIndexAddr());
--- 110,117 ----
      assertEquals("segmentId", segmentId, rootBlock.getSegmentId());
      assertEquals("nextOffset", nextOffset, rootBlock.getNextOffset());
!     assertEquals("firstTxId", firstTxId, rootBlock.getFirstCommitTime());
!     assertEquals("lastTxId", lastTxId, rootBlock.getLastCommitTime());
      assertEquals("commitCounter", commitCounter, rootBlock.getCommitCounter());
!     assertEquals("commitTime", commitTimestamp, rootBlock.getCommitTimestamp());
      assertEquals("commitRecordAddr", commitRecordAddr, rootBlock.getCommitRecordAddr());
      assertEquals("commitRecordIndexAddr", commitRecordIndexAddr, rootBlock.getCommitRecordIndexAddr());
***************
*** 125,132 ****
      assertEquals("segmentId", segmentId, rootBlock.getSegmentId());
      assertEquals("nextOffset", nextOffset, rootBlock.getNextOffset());
!     assertEquals("firstTxId", firstTxId, rootBlock.getFirstTxCommitTime());
!     assertEquals("lastTxId", lastTxId, rootBlock.getLastTxCommitTime());
      assertEquals("commitCounter", commitCounter, rootBlock.getCommitCounter());
!     assertEquals("commitTimestamp", commitTimestamp, rootBlock.getCommitTimestamp());
      assertEquals("commitRecordAddr", commitRecordAddr, rootBlock.getCommitRecordAddr());
      assertEquals("commitRecordIndexAddr", commitRecordIndexAddr, rootBlock.getCommitRecordIndexAddr());
--- 125,132 ----
      assertEquals("segmentId", segmentId, rootBlock.getSegmentId());
      assertEquals("nextOffset", nextOffset, rootBlock.getNextOffset());
!     assertEquals("firstTxId", firstTxId, rootBlock.getFirstCommitTime());
!     assertEquals("lastTxId", lastTxId, rootBlock.getLastCommitTime());
      assertEquals("commitCounter", commitCounter, rootBlock.getCommitCounter());
!     assertEquals("commitTime", commitTimestamp, rootBlock.getCommitTimestamp());
      assertEquals("commitRecordAddr", commitRecordAddr, rootBlock.getCommitRecordAddr());
      assertEquals("commitRecordIndexAddr", commitRecordIndexAddr, rootBlock.getCommitRecordIndexAddr());

--- NEW FILE: AbstractTestTxRunState.java ---

/**
The Notice below must appear in each file of the Source Code of any copy you
distribute of the Licensed Product. Contributors to any Modifications may add
their own copyright notices to identify their own contributions.

License:

The contents of this file are subject to the CognitiveWeb Open Source License
Version 1.1 (the License). You may not copy or use this file, in either source
code or executable form, except in compliance with the License. You may obtain
a copy of the License from

  http://www.CognitiveWeb.org/legal/license/

Software distributed under the License is distributed on an AS IS basis,
WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for
the specific language governing rights and limitations under the License.

Copyrights:

Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003
CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is
available at

  http://www.CognitiveWeb.org

Portions Copyright (c) 2002-2003 Bryan Thompson.

Acknowledgements:

Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL),
from which this License was derived. This License contains terms that differ
from JOSL.

Special thanks to the CognitiveWeb Open Source Contributors for their
suggestions and support of the Cognitive Web.

Modifications:

*/
/*
 * Created on Feb 13, 2007
 */
package com.bigdata.journal;

import java.io.IOException;

import junit.framework.TestSuite;

import com.bigdata.isolation.UnisolatedBTree;
import com.bigdata.objndx.IIndex;

/**
 * Test suite for the state machine governing the transaction {@link RunState}
 * transitions.
 *
 * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
 * @version $Id$
 */
abstract public class AbstractTestTxRunState extends ProxyTestCase {

    public static TestSuite suite() {

        TestSuite suite = new TestSuite("Transaction run state");

        suite.addTestSuite(TestReadCommitted.class);
        suite.addTestSuite(TestReadOnly.class);
        suite.addTestSuite(TestReadWrite.class);

        return suite;

    }

    /**
     * 
     */
    public AbstractTestTxRunState() {
    }

    /**
     * @param name
     */
    public AbstractTestTxRunState(String name) {
        super(name);
    }

    /**
     * Return a new transaction start time.
     */
    abstract public long newTx(Journal journal);

    public static class TestReadCommitted extends AbstractTestTxRunState {

        public TestReadCommitted() {
        }

        public TestReadCommitted(String name) {
            super(name);
        }

        public long newTx(Journal journal) {

            return journal.newTx(IsolationEnum.ReadCommitted);

        }

    }

    public static class TestReadOnly extends AbstractTestTxRunState {

        public TestReadOnly() {
        }

        public TestReadOnly(String name) {
            super(name);
        }

        public long newTx(Journal journal) {

            return journal.newTx(IsolationEnum.ReadOnly);

        }

    }

    public static class TestReadWrite extends AbstractTestTxRunState {

        public TestReadWrite() {
        }

        public TestReadWrite(String name) {
            super(name);
        }

        public long newTx(Journal journal) {

            return journal.newTx(IsolationEnum.ReadWrite);

        }

    }

    /*
     * Transaction run state tests.
     */

    /**
     * Simple test of the transaction run state machine.
     */
    public void test_runStateMachine_activeAbort() throws IOException {

        Journal journal = new Journal(getProperties());

        assertTrue(journal.activeTx.isEmpty());
        assertTrue(journal.preparedTx.isEmpty());

        final long ts0 = newTx(journal);

        final ITx tx0 = journal.getTx(ts0);

        assertEquals(ts0, tx0.getStartTimestamp());

        assertTrue(tx0 == journal.getTx(ts0));

        assertTrue(tx0.isActive());
        assertFalse(tx0.isPrepared());
        assertFalse(tx0.isAborted());
        assertFalse(tx0.isCommitted());
        assertFalse(tx0.isComplete());

        assertTrue(journal.activeTx.containsKey(ts0));
        assertFalse(journal.preparedTx.containsKey(ts0));

        journal.abort(ts0);

        /*
         * note: when the abort is asynchronous, i.e., for a read-write
         * transaction, this causes the test to wait until the abort task has
         * been executed.
         */
        journal.commitService.shutdown();

        assertFalse(tx0.isActive());
        assertFalse(tx0.isPrepared());
        assertTrue(tx0.isAborted());
        assertFalse(tx0.isCommitted());
        assertTrue(tx0.isComplete());

        assertFalse(journal.activeTx.containsKey(ts0));
        assertFalse(journal.preparedTx.containsKey(ts0));

        journal.closeAndDelete();

    }

    /**
     * Simple test of the transaction run state machine.
     */
    public void test_runStateMachine_activePrepareAbort() throws IOException {

        Journal journal = new Journal(getProperties());

        assertTrue(journal.activeTx.isEmpty());
        assertTrue(journal.preparedTx.isEmpty());

        final long ts0 = newTx(journal);

        final ITx tx0 = journal.getTx(ts0);

        assertEquals(ts0, tx0.getStartTimestamp());

        assertTrue(tx0 == journal.getTx(ts0));

        assertTrue( tx0.isActive() );
        assertFalse( tx0.isPrepared() );
        assertFalse( tx0.isAborted() );
        assertFalse( tx0.isCommitted() );
        assertFalse( tx0.isComplete() );

        assertTrue(journal.activeTx.containsKey(ts0));
        assertFalse(journal.preparedTx.containsKey(ts0));

        final long commitTime = (tx0.isReadOnly()?0L:journal.nextTimestamp());

        tx0.prepare(commitTime);

        assertFalse( tx0.isActive() );
        assertTrue( tx0.isPrepared() );
        assertFalse( tx0.isAborted() );
        assertFalse( tx0.isCommitted() );
        assertFalse( tx0.isComplete() );

        assertFalse(journal.activeTx.containsKey(ts0));
        assertTrue(journal.preparedTx.containsKey(ts0));

        tx0.abort();

        assertFalse( tx0.isActive() );
        assertFalse( tx0.isPrepared() );
        assertTrue( tx0.isAborted() );
        assertFalse( tx0.isCommitted() );
        assertTrue( tx0.isComplete() );

        assertFalse(journal.activeTx.containsKey(ts0));
        assertFalse(journal.preparedTx.containsKey(ts0));

        journal.closeAndDelete();

    }

    /**
     * Simple test of the transaction run state machine.
     */
    public void test_runStateMachine_activePrepareCommit() throws IOException {

        Journal journal = new Journal(getProperties());

        assertTrue(journal.activeTx.isEmpty());
        assertTrue(journal.preparedTx.isEmpty());

        final long ts0 = newTx(journal);

        final ITx tx0 = journal.getTx(ts0);

        assertEquals(ts0, tx0.getStartTimestamp());

        assertTrue(tx0 == journal.getTx(ts0));

        assertTrue( tx0.isActive() );
        assertFalse( tx0.isPrepared() );
        assertFalse( tx0.isAborted() );
        assertFalse( tx0.isCommitted() );
        assertFalse( tx0.isComplete() );

        assertTrue(journal.activeTx.containsKey(ts0));
        assertFalse(journal.preparedTx.containsKey(ts0));

        final long commitTime = (tx0.isReadOnly() ? 0L : journal
                .nextTimestamp());

        tx0.prepare(commitTime);

        assertFalse( tx0.isActive() );
        assertTrue( tx0.isPrepared() );
        assertFalse( tx0.isAborted() );
        assertFalse( tx0.isCommitted() );
        assertFalse( tx0.isComplete() );

        assertFalse(journal.activeTx.containsKey(ts0));
        assertTrue(journal.preparedTx.containsKey(ts0));

        assertEquals(commitTime,tx0.commit());

        assertFalse( tx0.isActive() );
        assertFalse( tx0.isPrepared() );
        assertFalse( tx0.isAborted() );
        assertTrue( tx0.isCommitted() );
        assertTrue( tx0.isComplete() );

        assertFalse(journal.activeTx.containsKey(ts0));
        assertFalse(journal.preparedTx.containsKey(ts0));

        journal.closeAndDelete();

    }

    /**
     * Simple test of the transaction run state machine. verifies that a 2nd
     * attempt to abort the same transaction results in an exception that does
     * not change the transaction run state.
     */
    public void test_runStateMachine_activeAbortAbort_correctRejection()
            throws IOException {

        Journal journal = new Journal(getProperties());

        assertTrue(journal.activeTx.isEmpty());
        assertTrue(journal.preparedTx.isEmpty());

        final long ts0 = newTx(journal);

        final ITx tx0 = journal.getTx(ts0);

        assertEquals(ts0, tx0.getStartTimestamp());

        assertTrue(tx0 == journal.getTx(ts0));

        assertTrue(tx0.isActive());
        assertFalse(tx0.isPrepared());
        assertFalse(tx0.isAborted());
        assertFalse(tx0.isCommitted());
        assertFalse(tx0.isComplete());

        assertTrue(journal.activeTx.containsKey(ts0));
        assertFalse(journal.preparedTx.containsKey(ts0));

        tx0.abort();

        assertFalse(tx0.isActive());
        assertFalse(tx0.isPrepared());
        assertTrue(tx0.isAborted());
        assertFalse(tx0.isCommitted());
        assertTrue(tx0.isComplete());

        assertFalse(journal.activeTx.containsKey(ts0));
        assertFalse(journal.preparedTx.containsKey(ts0));

        try {
            tx0.abort();
            fail("Expecting: " + IllegalStateException.class);
        } catch (IllegalStateException ex) {
            System.err.println("Ignoring expected exception: " + ex);
        }

        assertFalse(tx0.isActive());
        assertFalse(tx0.isPrepared());
        assertTrue(tx0.isAborted());
        assertFalse(tx0.isCommitted());
        assertTrue(tx0.isComplete());

        assertFalse(journal.activeTx.containsKey(ts0));
        assertFalse(journal.preparedTx.containsKey(ts0));

        journal.closeAndDelete();

    }

    /**
     * Simple test of the transaction run state machine verifies that a 2nd
     * attempt to prepare the same transaction results in an exception that
     * changes the transaction run state to 'aborted'.
     */
    public void test_runStateMachine_activePreparePrepare_correctRejection()
            throws IOException {

        Journal journal = new Journal(getProperties());

        assertTrue(journal.activeTx.isEmpty());
        assertTrue(journal.preparedTx.isEmpty());

        final long ts0 = newTx(journal);

        final ITx tx0 = journal.getTx(ts0);

        assertEquals(ts0, tx0.getStartTimestamp());

        assertTrue(tx0 == journal.getTx(ts0));

        assertTrue( tx0.isActive() );
        assertFalse( tx0.isPrepared() );
        assertFalse( tx0.isAborted() );
        assertFalse( tx0.isCommitted() );
        assertFalse( tx0.isComplete() );

        assertTrue(journal.activeTx.containsKey(ts0));
        assertFalse(journal.preparedTx.containsKey(ts0));

        final long commitTime = (tx0.isReadOnly() ? 0L : journal
                .nextTimestamp());

        tx0.prepare(commitTime);

        assertFalse( tx0.isActive() );
        assertTrue( tx0.isPrepared() );
        assertFalse( tx0.isAborted() );
        assertFalse( tx0.isCommitted() );
        assertFalse( tx0.isComplete() );

        assertFalse(journal.activeTx.containsKey(ts0));
        assertTrue(journal.preparedTx.containsKey(ts0));

        try {
            tx0.prepare(commitTime);
            fail("Expecting: "+IllegalStateException.class);
        } catch( IllegalStateException ex ) {
            System.err.println("Ignoring expected exception: "+ex);
        }

        assertFalse( tx0.isActive() );
        assertFalse( tx0.isPrepared() );
        assertTrue( tx0.isAborted() );
        assertFalse( tx0.isCommitted() );
        assertTrue( tx0.isComplete() );

        assertFalse(journal.activeTx.containsKey(ts0));
        assertFalse(journal.preparedTx.containsKey(ts0));

        journal.closeAndDelete();

    }

    /**
     * Simple test of the transaction run state machine verifies that a commit
     * out of order results in an exception that changes the transaction run
     * state to 'aborted'.
     */
    public void test_runStateMachine_activeCommit_correctRejection()
            throws IOException {

        Journal journal = new Journal(getProperties());

        assertTrue(journal.activeTx.isEmpty());
        assertTrue(journal.preparedTx.isEmpty());

        final long ts0 = newTx(journal);

        final ITx tx0 = journal.getTx(ts0);

        assertEquals(ts0, tx0.getStartTimestamp());

        assertTrue(tx0 == journal.getTx(ts0));

        assertTrue( tx0.isActive() );
        assertFalse( tx0.isPrepared() );
        assertFalse( tx0.isAborted() );
        assertFalse( tx0.isCommitted() );
        assertFalse( tx0.isComplete() );

        assertTrue(journal.activeTx.containsKey(ts0));
        assertFalse(journal.preparedTx.containsKey(ts0));

        try {
            tx0.commit();
            fail("Expecting: "+IllegalStateException.class);
        } catch( IllegalStateException ex ) {
            System.err.println("Ignoring expected exception: "+ex);
        }

        assertFalse( tx0.isActive() );
        assertFalse( tx0.isPrepared() );
        assertTrue( tx0.isAborted() );
        assertFalse( tx0.isCommitted() );
        assertTrue( tx0.isComplete() );

        assertFalse(journal.activeTx.containsKey(ts0));
        assertFalse(journal.preparedTx.containsKey(ts0));

        journal.closeAndDelete();

    }

    /**
     * Verifies that access to, and operations on, a named indices is denied
     * after a PREPARE.
     *
     * @throws IOException
     *
     * @todo also test after an abort.
     */
    public void test_runStateMachine_prepared_correctRejection()
            throws IOException {

        Journal journal = new Journal(getProperties());

        String name = "abc";

        {

            journal.registerIndex(name, new UnisolatedBTree(journal));

            journal.commit();

        }

        final long tx0 = newTx(journal);

        ITx tmp = journal.getTx(tx0);

        assertNotNull(tmp);

        IIndex ndx = journal.getIndex(name,tx0);

        assertNotNull(ndx);

        // commit the journal.
        journal.commit(tx0);

        /*
         * Verify that you can not access a named index after 'prepare'.
         */
        try {
            journal.getIndex(name,tx0);
            fail("Expecting: " + IllegalStateException.class);
        } catch (IllegalStateException ex) {
            System.err.println("Ignoring expected exception: " + ex);
        }

        /*
         * Verify that operations on an pre-existing index reference are now
         * denied.
         */
        try {
            ndx.lookup(new byte[] { 1 });
            fail("Expecting: " + IllegalStateException.class);
        } catch (IllegalStateException ex) {
            System.err.println("Ignoring expected exception: " + ex);
        }
        try {
            ndx.contains(new byte[] { 1 });
            fail("Expecting: " + IllegalStateException.class);
        } catch (IllegalStateException ex) {
            System.err.println("Ignoring expected exception: " + ex);
        }
        try {
            ndx.remove(new byte[] { 1 });
            fail("Expecting: " + IllegalStateException.class);
        } catch (IllegalStateException ex) {
            System.err.println("Ignoring expected exception: " + ex);
        }
        try {
            ndx.insert(new byte[] { 1 }, new byte[] { 2 });
            fail("Expecting: " + IllegalStateException.class);
        } catch (IllegalStateException ex) {
            System.err.println("Ignoring expected exception: " + ex);
        }

        assertFalse(tmp.isActive());
        assertTrue(tmp.isPrepared());
        assertFalse(tmp.isAborted());
        assertFalse(tmp.isCommitted());
        assertFalse(tmp.isComplete());

        assertFalse(journal.activeTx.containsKey(tmp.getStartTimestamp()));
        assertFalse(journal.preparedTx.containsKey(tmp.getStartTimestamp()));
        assertNull(journal.getTx(tmp.getStartTimestamp()));

        journal.closeAndDelete();

    }

//    /**
//     * Verifies that access to, and operations on, a named indices is denied
//     * after an ABORT.
//     *
//     * @throws IOException
//     */
//    public void test_runStateMachine_aborted_correctRejection()
//            throws IOException {
//
//        final Properties properties = getProperties();
//
//        Journal journal = new Journal(properties);
//
//        String name = "abc";
//
//        {
//
//            journal.registerIndex(name, new UnisolatedBTree(journal));
//
//            journal.commit();
//
//        }
//
//        ITx tx0 = journal.newTx();
//
//        IIndex ndx = tx0.getIndex(name);
//
//        assertNotNull(ndx);
//
//        tx0.abort();
//
//        /*
//         * Verify that you can not access a named index.
//         */
//        try {
//            tx0.getIndex(name);
//            fail("Expecting: " + IllegalStateException.class);
//        } catch (IllegalStateException ex) {
//            System.err.println("Ignoring expected exception: " + ex);
//        }
//
//        /*
//         * Verify that operations on an pre-existing index reference are now
//         * denied.
//         */
//        try {
//            ndx.lookup(new byte[] { 1 });
//            fail("Expecting: " + IllegalStateException.class);
//        } catch (IllegalStateException ex) {
//            System.err.println("Ignoring expected exception: " + ex);
//        }
//        try {
//            ndx.contains(new byte[] { 1 });
//            fail("Expecting: " + IllegalStateException.class);
//        } catch (IllegalStateException ex) {
//            System.err.println("Ignoring expected exception: " + ex);
//        }
//        try {
//            ndx.remove(new byte[] { 1 });
//            fail("Expecting: " + IllegalStateException.class);
//        } catch (IllegalStateException ex) {
//            System.err.println("Ignoring expected exception: " + ex);
//        }
//        try {
//            ndx.insert(new byte[] { 1 }, new byte[] { 2 });
//            fail("Expecting: " + IllegalStateException.class);
//        } catch (IllegalStateException ex) {
//            System.err.println("Ignoring expected exception: " + ex);
//        }
//
//        assertFalse(tx0.isActive());
//        assertFalse(tx0.isPrepared());
//        assertTrue (tx0.isAborted());
//        assertFalse(tx0.isCommitted());
//        assertTrue (tx0.isComplete());
//
//        assertFalse(journal.activeTx.containsKey(tx0.getStartTimestamp()));
//        assertFalse(journal.preparedTx.containsKey(tx0.getStartTimestamp()));
//        assertNull(journal.getTx(tx0.getStartTimestamp()));
//
//        journal.close();
//
//    }
//
//    /**
//     * Verifies that access to, and operations on, a named indices is denied
//     * after a COMMIT.
//     *
//     * @throws IOException
//     */
//    public void test_runStateMachine_commit_correctRejection()
//            throws IOException {
//
//        final Properties properties = getProperties();
//
//        Journal journal = new Journal(properties);
//
//        String name = "abc";
//
//        {
//
//            journal.registerIndex(name, new UnisolatedBTree(journal));
//
//            journal.commit();
//
//        }
//
//        ITx tx0 = journal.newTx();
//
//        IIndex ndx = tx0.getIndex(name);
//
//        assertNotNull(ndx);
//
//        tx0.prepare();
//        tx0.commit();
//
//        /*
//         * Verify that you can not access a named index.
// */ // try { // tx0.getIndex(name); // fail("Expecting: " + IllegalStateException.class); // } catch (IllegalStateException ex) { // System.err.println("Ignoring expected exception: " + ex); // } // // /* // * Verify that operations on an pre-existing index reference are now // * denied. // */ // try { // ndx.lookup(new byte[] { 1 }); // fail("Expecting: " + IllegalStateException.class); // } catch (IllegalStateException ex) { // System.err.println("Ignoring expected exception: " + ex); // } // try { // ndx.contains(new byte[] { 1 }); // fail("Expecting: " + IllegalStateException.class); // } catch (IllegalStateException ex) { // System.err.println("Ignoring expected exception: " + ex); // } // try { // ndx.remove(new byte[] { 1 }); // fail("Expecting: " + IllegalStateException.class); // } catch (IllegalStateException ex) { // System.err.println("Ignoring expected exception: " + ex); // } // try { // ndx.insert(new byte[] { 1 }, new byte[] { 2 }); // fail("Expecting: " + IllegalStateException.class); // } catch (IllegalStateException ex) { // System.err.println("Ignoring expected exception: " + ex); // } // // assertFalse(tx0.isActive()); // assertFalse(tx0.isPrepared()); // assertFalse(tx0.isAborted()); // assertTrue(tx0.isCommitted()); // assertTrue(tx0.isComplete()); // // assertFalse(journal.activeTx.containsKey(tx0.getStartTimestamp())); // assertFalse(journal.preparedTx.containsKey(tx0.getStartTimestamp())); // assertNull(journal.getTx(tx0.getStartTimestamp())); // // journal.close(); // // } } Index: TestReadOnlyTx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestReadOnlyTx.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** TestReadOnlyTx.java 22 Feb 2007 16:59:34 -0000 1.4 --- TestReadOnlyTx.java 28 Feb 2007 13:59:09 -0000 1.5 *************** *** 48,53 **** package com.bigdata.journal; - import java.util.Properties; - import 
com.bigdata.isolation.UnisolatedBTree; import com.bigdata.objndx.IIndex; --- 48,51 ---- *************** *** 109,113 **** */ ! final long tx1 = journal.newTx(true); IIndex ndx = journal.getIndex(name,tx1); --- 107,111 ---- */ ! final long tx1 = journal.newTx(IsolationEnum.ReadOnly); IIndex ndx = journal.getIndex(name,tx1); *************** *** 135,139 **** */ ! final long tx1 = journal.newTx(true); IIndex ndx = journal.getIndex(name,tx1); --- 133,137 ---- */ ! final long tx1 = journal.newTx(IsolationEnum.ReadOnly); IIndex ndx = journal.getIndex(name,tx1); |
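
The hunks above replace the old boolean flag in `journal.newTx(true)` with an isolation level, `journal.newTx(IsolationEnum.ReadOnly)`. A minimal, self-contained sketch of what the enum distinguishes (the three enum values are taken from this commit; the `isReadOnly` and `isStableView` helpers are hypothetical illustrations, not bigdata API):

```java
public class IsolationDemo {

    /** Isolation levels, as introduced by this commit (IsolationEnum.java). */
    public enum IsolationEnum {
        /** A fully isolated read-write transaction. */
        ReadWrite,
        /** A fully isolated read-only transaction. */
        ReadOnly,
        /** A read-only view that advances as concurrent transactions commit. */
        ReadCommitted
    }

    /** Only a ReadWrite transaction accepts writes. */
    public static boolean isReadOnly(IsolationEnum level) {
        return level != IsolationEnum.ReadWrite;
    }

    /**
     * Whether the view is fixed as of the transaction start time. A
     * read-committed view is NOT stable: it tracks the most recent commit
     * record, so newly committed data become visible to later operations.
     */
    public static boolean isStableView(IsolationEnum level) {
        return level != IsolationEnum.ReadCommitted;
    }

    public static void main(String[] args) {
        for (IsolationEnum level : IsolationEnum.values()) {
            System.out.println(level + ": readOnly=" + isReadOnly(level)
                    + ", stableView=" + isStableView(level));
        }
    }
}
```

The single enum argument replaces what would otherwise have become two boolean flags (read-only and read-committed), which is why the old `newTx(boolean)` and `newReadCommittedTx()` methods are commented out in the `ITransactionManager` diff below.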
From: Bryan T. <tho...@us...> - 2007-02-28 13:59:14
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/journal
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv19761/src/java/com/bigdata/journal

Modified Files:
	Tx.java ITx.java IRootBlockView.java ValidationError.java
	RootBlockView.java TransactionServer.java ITransactionManager.java
	Journal.java
Added Files:
	ReadCommittedTx.java AbstractTx.java IsolationEnum.java
Log Message:
Adds support for read-committed transactions.

--- NEW FILE: ReadCommittedTx.java ---
/**

The Notice below must appear in each file of the Source Code of any
copy you distribute of the Licensed Product.  Contributors to any
Modifications may add their own copyright notices to identify their
own contributions.

License:

The contents of this file are subject to the CognitiveWeb Open Source
License Version 1.1 (the License).  You may not copy or use this file,
in either source code or executable form, except in compliance with
the License.  You may obtain a copy of the License from

  http://www.CognitiveWeb.org/legal/license/

Software distributed under the License is distributed on an AS IS
basis, WITHOUT WARRANTY OF ANY KIND, either express or implied.  See
the License for the specific language governing rights and limitations
under the License.

Copyrights:

Portions created by or assigned to CognitiveWeb are Copyright
(c) 2003-2003 CognitiveWeb.  All Rights Reserved.  Contact
information for CognitiveWeb is available at

  http://www.CognitiveWeb.org

Portions Copyright (c) 2002-2003 Bryan Thompson.

Acknowledgements:

Special thanks to the developers of the Jabber Open Source License 1.0
(JOSL), from which this License was derived.  This License contains
terms that differ from JOSL.

Special thanks to the CognitiveWeb Open Source Contributors for their
suggestions and support of the Cognitive Web.
Modifications:

*/
/*
 * Created on Feb 27, 2007
 */
package com.bigdata.journal;

import com.bigdata.isolation.IIsolatableIndex;
import com.bigdata.isolation.IIsolatedIndex;
import com.bigdata.objndx.BatchContains;
import com.bigdata.objndx.BatchInsert;
import com.bigdata.objndx.BatchLookup;
import com.bigdata.objndx.BatchRemove;
import com.bigdata.objndx.IEntryIterator;
import com.bigdata.objndx.IIndex;

/**
 * A read-committed transaction provides a read-only view onto the current
 * committed state of the database. Each time a view of an index is requested
 * using {@link #getIndex(String)} the returned view will provide access to the
 * most recent committed state for that index. Unlike a fully isolated
 * transaction, a read-committed transaction does NOT provide a consistent view
 * of the database over time. However, a read-committed transaction imposes
 * fewer constraints on when old resources (historical journals and index
 * segments) may be released. For this reason, a read-committed transaction is a
 * good choice when a very-long running read must be performed on the database.
 * Since a read-committed transaction does not allow writes, the commit and
 * abort protocols are identical.
 * 
 * @todo In order to release the resources associated with a commit point
 *       (historical journals and index segments) we need a protocol by which a
 *       delegate index view is explicitly closed (or collected using a weak
 *       value cache) once it is no longer in use for an operation. The index
 *       views need to be accumulated on a commit point (aka commit record).
 *       When no index views for a given commit record are active, the commit
 *       point is no longer accessible to the read-committed transaction and
 *       should be released. Resources (journals and index segments) required to
 *       present views on that commit point MAY be released once there are no
 *       longer any fully isolated transactions whose start time would select
 *       that commit point as their ground state.
 * 
 * @todo We may not even need a start time for a read-committed transaction
 *       since it always reads from the most recent commit record, in which case
 *       it could be started and finished with lower latency than a
 *       fully-isolated read-only or read-write transaction.
 * 
 * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
 * @version $Id$
 */
public class ReadCommittedTx extends AbstractTx implements ITx {

    public ReadCommittedTx(Journal journal, long startTime) {

        super(journal, startTime, true /* readOnly */);

    }

    /**
     * Return a read-only view of the named index with read-committed isolation.
     * 
     * @return The index or <code>null</code> if the named index is not
     *         registered.
     */
    public IIndex getIndex(String name) {

        if (journal.getIndex(name) == null) {

            /*
             * The named index is not registered at this time.
             */

            return null;

        }

        return new ReadCommittedIndex(this, name);

    }

    /**
     * Light-weight implementation of a read-committed index view.
     * <p>
     * A delegation model is used since commits or overflows of the journal
     * might invalidate the index objects that actually read on the journal
     * and/or index segments. The delegation strategy checks the commit counters
     * and looks up a new delegate index object each time the commit counter is
     * updated. In this way, newly committed data are always made visible to the
     * next operation on the index. In-progress writes are NOT visible since we
     * only read from a delegate index discovered by resolving the index name
     * against the most recent {@link ICommitRecord}.
     * 
     * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
     * @version $Id$
     * 
     * @todo if we add an extensible batch index operation then we need to make
     *       sure that it is not possible to use that to circumvent the
     *       read-only contract and write on the unisolated delegate index. For
     *       the same reason, it makes sense to make this class <em>final</em>
     *       and to make {@link #getIndex()} private.
*/ public static class ReadCommittedIndex implements IIndex, IIsolatedIndex { /** * The transaction. */ final protected ReadCommittedTx tx; /** * The name of the index. */ final protected String name; /** * The last commit record for which an index was returned. */ private ICommitRecord commitRecord; /** * The last index returned. */ private IIsolatableIndex index; public ReadCommittedIndex(ReadCommittedTx tx, String name) { assert tx != null; assert tx.isActive(); assert name != null; this.tx = tx; this.name = name; this.index = getIndex(); } /** * Return the current {@link IIsolatableIndex} view. The read-committed * view simply exposes this as a read-only {@link IIsolatedIndex}. * * @return The current unisolated index on the journal (read-write). * * @exception IllegalStateException * if the named index is not registered. */ protected IIsolatableIndex getIndex() { /* * Obtain the most current {@link ICommitRecord} on the journal. All * read operations are against the named index as resolved using * this commit record. */ ICommitRecord currentCommitRecord = tx.journal.getCommitRecord(); if (commitRecord != null && index != null && commitRecord.getCommitCounter() == currentCommitRecord .getCommitCounter()) { /* * the commit record has not changed so we have the correct * index view. */ return index; } // update the commit record. this.commitRecord = currentCommitRecord; // lookup the current index view against that commit record. this.index = (IIsolatableIndex) tx.journal.getIndex(name, commitRecord); if(index == null) { throw new IllegalStateException("Index not defined: "+name); } return index; } public boolean contains(byte[] key) { return getIndex().contains(key); } /** * @exception UnsupportedOperationException always. */ public Object insert(Object key, Object value) { throw new UnsupportedOperationException(); } public Object lookup(Object key) { return getIndex().lookup(key); } /** * @exception UnsupportedOperationException always. 
*/ public Object remove(Object key) { throw new UnsupportedOperationException(); } public int rangeCount(byte[] fromKey, byte[] toKey) { return getIndex().rangeCount(fromKey, toKey); } public IEntryIterator rangeIterator(byte[] fromKey, byte[] toKey) { return getIndex().rangeIterator(fromKey, toKey); } public void contains(BatchContains op) { getIndex().contains(op); } /** * @exception UnsupportedOperationException always. */ public void insert(BatchInsert op) { throw new UnsupportedOperationException(); } public void lookup(BatchLookup op) { getIndex().lookup(op); } /** * @exception UnsupportedOperationException always. */ public void remove(BatchRemove op) { throw new UnsupportedOperationException(); } } } --- NEW FILE: AbstractTx.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. 
Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Feb 27, 2007 */ package com.bigdata.journal; /** * An abstract base class that encapsulates the run state transitions and * constraints for transactions. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ abstract public class AbstractTx implements ITx { /* * Text for error messages. */ final static String NOT_ACTIVE = "Not active"; final static String NOT_PREPARED = "Transaction is not prepared"; final static String NOT_COMMITTED = "Transaction is not committed"; final static String IS_COMPLETE = "Transaction is complete"; /** * The transaction uses the {@link Journal} for some handshaking in the * commit protocol and to locate the named indices that it isolates. */ final protected Journal journal; /** * The start startTime assigned to this transaction. * <p> * Note: Transaction {@link #startTime} and {@link #commitTime}s * are assigned by a global time service. The time service must provide * unique times for transaction start and commit timestamps and for commit * times for unisolated {@link Journal#commit()}s. */ final protected long startTime; /** * The pre-computed hash code for the transaction (based on the start time). */ final private int hashCode; /** * The commit time assigned to this transaction and zero if the transaction * has not prepared or is not writable. */ private long commitTime = 0L; /** * True iff the transaction is read only and will reject writes. */ final protected boolean readOnly; private RunState runState; protected AbstractTx(Journal journal, long startTime, boolean readOnly ) { if (journal == null) throw new IllegalArgumentException(); assert startTime != 0L; this.journal = journal; this.startTime = startTime; this.readOnly = readOnly; // pre-compute the hash code for the transaction. 
this.hashCode = Long.valueOf(startTime).hashCode(); journal.activateTx(this); this.runState = RunState.ACTIVE; } /** * The hash code is based on the {@link #getStartTimestamp()}. */ final public int hashCode() { return hashCode; } /** * True iff they are the same object or have the same start timestamp. * * @param o * Another transaction object. */ final public boolean equals(ITx o) { return this == o || startTime == o.getStartTimestamp(); } final public long getStartTimestamp() { return startTime; } final public long getCommitTimestamp() { if(readOnly) { throw new UnsupportedOperationException(); } switch(runState) { case ACTIVE: case ABORTED: throw new IllegalStateException(); case PREPARED: case COMMITTED: if(commitTime == 0L) throw new AssertionError(); return commitTime; } throw new AssertionError(); } /** * Returns a string representation of the transaction start time. */ final public String toString() { return ""+startTime; } final public boolean isReadOnly() { return readOnly; } final public boolean isActive() { return runState == RunState.ACTIVE; } final public boolean isPrepared() { return runState == RunState.PREPARED; } final public boolean isComplete() { return runState == RunState.COMMITTED || runState == RunState.ABORTED; } final public boolean isCommitted() { return runState == RunState.COMMITTED; } final public boolean isAborted() { return runState == RunState.ABORTED; } final public void abort() { if (isComplete()) throw new IllegalStateException(IS_COMPLETE); try { runState = RunState.ABORTED; journal.completedTx(this); } finally { releaseResources(); } } final public void prepare(long commitTime) { if( ! isActive() ) { if( ! isComplete() ) { abort(); } throw new IllegalStateException(NOT_ACTIVE); } if (!readOnly) { try { assert commitTime != 0L; // save the assigned commit time. this.commitTime = commitTime; /* * Validate against the current state of the various indices * on write the transaction has written. 
*/ if (!validateWriteSets()) { abort(); throw new ValidationError(); } } catch( ValidationError ex) { throw ex; } catch (Throwable t) { abort(); throw new RuntimeException("Unexpected error: " + t, t); } } else { // a read-only tx does not have a commit time. assert commitTime == 0L; } journal.prepared(this); runState = RunState.PREPARED; } final public long commit() { if( ! isPrepared() ) { if( ! isComplete() ) { abort(); } throw new IllegalStateException(NOT_PREPARED); } // The commitTime is zero unless this is a writable transaction. final long commitTime = readOnly ? 0L : getCommitTimestamp(); try { if(!readOnly) { /* * Merge each isolated index into the global scope. This also * marks the slots used by the versions written by the * transaction as 'committed'. This operation MUST succeed (at a * logical level) since we have already validated (neither * read-write nor write-write conflicts exist). * * @todo Non-transactional operations on the global scope should * be either disallowed entirely or locked out during the * prepare-commit protocol when using transactions since they * (a) could invalidate the pre-condition for the merge; and (b) * uncommitted changes would be discarded if the merge operation * fails. One solution is to use batch operations or group * commit mechanism to dynamically create transactions from * unisolated operations. */ mergeOntoGlobalState(); // Atomic commit. journal.commitNow(commitTime); } runState = RunState.COMMITTED; journal.completedTx(this); } catch( Throwable t) { /* * If the operation fails then we need to discard any changes that * have been merged down into the global state. Failure to do this * will result in those changes becoming restart-safe when the next * transaction commits! * * Discarding registered committers is legal if we are observing * serializability; that is, if no one writes on the global state * for a restart-safe btree except mergeOntoGlobalState(). 
When this * constraint is observed it is impossible for there to be * uncommitted changes when we begin to merge down onto the store * and any changes may simply be discarded. * * Note: we can not simply reload the current root block (or reset * the nextOffset to be assigned) since concurrent transactions may * be writing non-restart safe data on the store in their own * isolated btrees. */ journal.abort(); abort(); throw new RuntimeException( t ); } finally { releaseResources(); } return commitTime; } /** * Invoked when a writable transaction prepares in order to validate its * write sets (one per isolated index). The default implementation is NOP. * * @return true iff the write sets were validated. */ protected boolean validateWriteSets() { // NOP. return true; } /** * Invoked during commit processing to merge down the write set from each * index isolated by this transactions onto the corresponding unisolated * index on the database. This method invoked iff a transaction has * successfully prepared and hence is known to have validated successfully. * The default implementation is a NOP. */ protected void mergeOntoGlobalState() { } /** * This method must be invoked any time a transaction completes ({@link #abort()}s * or {@link #commit()}s) in order to release resources held by that * transaction. The default implementation is a NOP and must be extended if * a transaction holds state. */ protected void releaseResources() { // NOP. } } Index: ITx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/ITx.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** ITx.java 21 Feb 2007 20:17:21 -0000 1.4 --- ITx.java 28 Feb 2007 13:59:10 -0000 1.5 *************** *** 60,74 **** /** ! * The transaction identifier (aka timestamp). * ! * @return The transaction identifier (aka timestamp). */ public long getStartTimestamp(); /** ! 
* Validate the write set for the transaction. * * @exception IllegalStateException ! * If the transaction is not active. If the transaction is * not complete, then it will be aborted. * --- 60,100 ---- /** ! * The start time for the transaction as assigned by a centralized ! * transaction manager service. Transaction start times are unique and also ! * serve as transaction identifiers. Note that this is NOT the time at which ! * a transaction begins executing on a specific journal as the same ! * transaction may start at different moments on different journals and ! * typically will only start on some journals rather than all. * ! * @return The transaction start time. */ public long getStartTimestamp(); /** ! * Return the commit timestamp assigned to this transaction by a centralized ! * transaction manager service. ! * ! * @return The commit timestamp assigned to this transaction. ! * ! * @exception UnsupportedOperationException ! * unless the transaction is writable. * * @exception IllegalStateException ! * if the transaction is writable but has not yet prepared ( ! * the commit time is assigned when the transaction is ! * prepared). ! */ ! public long getCommitTimestamp(); ! ! /** ! * Prepare the transaction for a {@link #commit()} by validating the write ! * set for each index isolated by the transaction. ! * ! * @param commitTime ! * The commit time assigned by a centralized transaction manager ! * service -or- ZERO (0L) IFF the transaction is read-only. ! * ! * @exception IllegalStateException ! * if the transaction is not active. If the transaction is * not complete, then it will be aborted. * *************** *** 77,91 **** * is thrown, then the transaction was aborted. */ ! public void prepare(); /** ! * Commit the transaction. * ! * @return The commit time assigned to the transactions. * * @exception IllegalStateException ! * If the transaction has not {@link #prepare() prepared}. * If the transaction is not already complete, then it is * aborted. 
*/ public long commit(); --- 103,122 ---- * is thrown, then the transaction was aborted. */ ! public void prepare(long commitTime); /** ! * Commit a transaction that has already been {@link #prepare(long)}d. * ! * @return The commit time assigned to the transactions -or- 0L if the ! * transaction was read-only. * * @exception IllegalStateException ! * If the transaction has not {@link #prepare(long) prepared}. * If the transaction is not already complete, then it is * aborted. + * + * @return The commit timestamp assigned by a centralized transaction + * manager service or <code>0L</code> if the transaction was + * read-only. */ public long commit(); *************** *** 152,158 **** * state of this transaction. * <p> ! * During {@link #prepare()}, the write set of each {@link IsolatedBTree} ! * will be validated against the then current commited state of the named ! * index. * <p> * During {@link #commit()}, the validated write sets will be merged down --- 183,189 ---- * state of this transaction. * <p> ! * During {@link #prepare(long)}, the write set of each ! * {@link IsolatedBTree} will be validated against the then current commited ! * state of the named index. * <p> * During {@link #commit()}, the validated write sets will be merged down Index: ITransactionManager.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/ITransactionManager.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** ITransactionManager.java 22 Feb 2007 16:59:34 -0000 1.2 --- ITransactionManager.java 28 Feb 2007 13:59:10 -0000 1.3 *************** *** 48,52 **** --- 48,55 ---- package com.bigdata.journal; + import com.bigdata.isolation.IConflictResolver; + import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.objndx.IIndex; + import com.bigdata.objndx.IndexSegment; /** *************** *** 67,93 **** /** ! * Create a new fully-isolated transaction. ! * ! 
* @param readOnly ! * When true, the transaction will reject writes. ! * ! * @return The transaction start time, which serves as the unique identifier ! * for the transaction. ! */ ! public long newTx(boolean readOnly); ! ! /** ! * Create a new read-committed transaction. The transaction will reject ! * writes. Any data committed by concurrent transactions will become visible ! * to indices isolated by this transaction (hence, "read comitted"). * <p> ! * This provides more isolation than "read dirty" since the concurrent ! * transactions MUST commit before their writes become visible to the a ! * read-committed transaction. * * @return The transaction start time, which serves as the unique identifier * for the transaction. */ ! public long newReadCommittedTx(); /** --- 70,134 ---- /** ! * Create a new transaction. * <p> ! * The concurrency control algorithm is MVCC, so readers never block and ! * only write-write conflicts can arise. It is possible to register an ! * {@link UnisolatedBTree} with an {@link IConflictResolver} in order to ! * present the application with an opportunity to validate write-write ! * conflicts using state-based techniques (i.e., by looking at the records ! * and timestamps and making an informed decision). ! * <p> ! * MVCC requires a strategy to release old versions that are no longer ! * accessible to active transactions. bigdata uses a highly efficient ! * technique in which writes are multiplexed onto append-only ! * {@link Journal}s and then evicted on overflow into {@link IndexSegment}s ! * using a bulk index build mechanism. Old journal and index segment ! * resources are simply deleted from the file system some time after they ! * are no longer accessible to active transactions. ! * ! * @param level ! * The isolation level. The following isolation levels are ! * supported: ! * <dl> ! * <dt>{@link IsolationEnum#ReadCommitted}</dt> ! * <dd>A read-only transaction in which data become visible ! 
* within the transaction as concurrent transactions commit. This ! * is suitable for very long read processes that do not require a ! * fully consistent view of the data.</dd> ! * <dt>{@link IsolationEnum#ReadOnly}</dt> ! * <dd>A fully isolated read-only transaction.</dd> ! * <dt>{@link IsolationEnum#ReadWrite}</dt> ! * <dd>A fully isolated read-write transaction.</dd> ! * </dl> * * @return The transaction start time, which serves as the unique identifier * for the transaction. */ ! public long newTx(IsolationEnum level); ! ! // /** ! // * Create a new fully-isolated transaction. ! // * ! // * @param readOnly ! // * When true, the transaction will reject writes. ! // * ! // * @return The transaction start time, which serves as the unique identifier ! // * for the transaction. ! // */ ! // public long newTx(boolean readOnly); ! // ! // /** ! // * Create a new read-committed transaction. The transaction will reject ! // * writes. Any data committed by concurrent transactions will become visible ! // * to indices isolated by this transaction (hence, "read comitted"). ! // * <p> ! // * This provides more isolation than "read dirty" since the concurrent ! // * transactions MUST commit before their writes become visible to the a ! // * read-committed transaction. ! // * ! // * @return The transaction start time, which serves as the unique identifier ! // * for the transaction. ! // */ ! // public long newReadCommittedTx(); /** *************** *** 96,100 **** * @param name * The index name. ! * @param ts * The transaction start time, which serves as the unique * identifier for the transaction. --- 137,141 ---- * @param name * The index name. ! * @param startTime * The transaction start time, which serves as the unique * identifier for the transaction. *************** *** 108,117 **** * if there is no active transaction with that timestamp. */ ! public IIndex getIndex(String name, long ts); /** * Abort the transaction. * ! 
* @param ts * The transaction start time, which serves as the unique * identifier for the transaction. --- 149,158 ---- * if there is no active transaction with that timestamp. */ ! public IIndex getIndex(String name, long startTime); /** * Abort the transaction. * ! * @param startTime * The transaction start time, which serves as the unique * identifier for the transaction. *************** *** 120,129 **** * if there is no active transaction with that timestamp. */ ! public void abort(long ts); /** * Commit the transaction. * ! * @param ts * The transaction start time, which serves as the unique * identifier for the transaction. --- 161,170 ---- * if there is no active transaction with that timestamp. */ ! public void abort(long startTime); /** * Commit the transaction. * ! * @param startTime * The transaction start time, which serves as the unique * identifier for the transaction. *************** *** 134,138 **** * if there is no active transaction with that timestamp. */ ! public long commit(long ts); } --- 175,179 ---- * if there is no active transaction with that timestamp. */ ! public long commit(long startTime); } Index: TransactionServer.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/TransactionServer.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** TransactionServer.java 20 Feb 2007 00:27:03 -0000 1.4 --- TransactionServer.java 28 Feb 2007 13:59:10 -0000 1.5 *************** *** 125,158 **** /** - * Isolation levels for a transaction. - * - * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ - */ - public static enum IsolationEnum { - - /** - * A fully isolated read-write transaction. - */ - ReadWrite(), - - /** - * A fully isolated read-only transaction. 
- */ - ReadOnly(), - - /** - * A read-only transaction that will read any data successfully - * committed on the database (the view provided by the transaction does - * not remain valid as of the transaction start time but evolves as - * concurrent transactions commit). - */ - ReadCommitted(); - - private IsolationEnum() {} - - } - - /** * Class modeling transaction metadata. An instance of this class is used to * model a transaction in the {@link TransactionServer}. {@link ITx} --- 125,128 ---- Index: Tx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Tx.java,v retrieving revision 1.33 retrieving revision 1.34 diff -C2 -d -r1.33 -r1.34 *** Tx.java 22 Feb 2007 16:59:34 -0000 1.33 --- Tx.java 28 Feb 2007 13:59:09 -0000 1.34 *************** *** 52,60 **** import java.util.Map; import com.bigdata.isolation.IsolatedBTree; import com.bigdata.isolation.UnisolatedBTree; import com.bigdata.objndx.IIndex; import com.bigdata.objndx.IndexSegment; - import com.bigdata.objndx.ReadOnlyIndex; import com.bigdata.rawstore.Bytes; import com.bigdata.scaleup.MetadataIndex; --- 52,62 ---- import java.util.Map; + import com.bigdata.isolation.IIsolatedIndex; import com.bigdata.isolation.IsolatedBTree; + import com.bigdata.isolation.ReadOnlyIsolatedIndex; import com.bigdata.isolation.UnisolatedBTree; + import com.bigdata.objndx.BTree; import com.bigdata.objndx.IIndex; import com.bigdata.objndx.IndexSegment; import com.bigdata.rawstore.Bytes; import com.bigdata.scaleup.MetadataIndex; *************** *** 98,118 **** * up to the point where a "commit" or "abort" is _requested_ for the tx. * - * @todo Support read-committed transactions (data committed by _other_ - * transactions during the transaction will be visible within that - * transaction). Read-committed transactions do NOT permit writes (they - * are read-only). Prepare and commit are NOPs. 
This might be a distinct - * implementation sharing a common base class for handling the run state - * stuff, or just a distinct implementation altogether. The read-committed - * transaction is implemented by just reading against the named indices on - * the journal. However, since commits or overflows of the journal might - * invalidate the index objects we may have to setup a delegation - * mechanism that resolves the named index either on each operation or - * whenever the transaction receives notice that the index must be - * discarded. In order to NOT see in-progress writes, the read-committed - * transaction actually needs to dynamically resolve the most recent - * {@link ICommitRecord} and then use the named indices resolved from - * that. This suggests an event notification mechanism for commits so that - * we can get the commit record more cheaply. - * * @todo Support transactions where the indices isolated by the transactions are * {@link PartitionedIndex}es. --- 100,103 ---- *************** *** 143,187 **** * pre-conditions itself and exceptions being thrown from here if the * server failed to test the pre-conditions and they were not met - * - * @todo PREPARE operations must be serialized unless they are provably - * non-overlapping. This will require a handshake with either the - * {@link Journal} or (more likely) the {@link TransactionServer}. */ ! public class Tx implements IStore, ITx { - /* - * Text for error messages. - */ - final static String NOT_ACTIVE = "Not active"; - final static String NOT_PREPARED = "Transaction is not prepared"; - final static String NOT_COMMITTED = "Transaction is not committed"; - final static String IS_COMPLETE = "Transaction is complete"; - - /** - * The transaction uses the {@link Journal} for some handshaking in the - * commit protocol and to locate the named indices that it isolates. - */ - final protected Journal journal; - - /** - * The start startTime assigned to this transaction. 
- * <p> - * Note: Transaction {@link #startTime} and {@link #commitTimestamp}s - * are assigned by a global time service. The time service must provide - * unique times for transaction start and commit timestamps and for commit - * times for unisolated {@link Journal#commit()}s. - */ - final protected long startTime; - - /** - * The commit startTime assigned to this transaction. - */ - private long commitTimestamp; - - /** - * True iff the transaction is read only and will reject writes. - */ - final protected boolean readOnly; - /** * The historical {@link ICommitRecord} choosen as the ground state for this --- 128,134 ---- * pre-conditions itself and exceptions being thrown from here if the * server failed to test the pre-conditions and they were not met */ ! public class Tx extends AbstractTx implements IStore, ITx { /** * The historical {@link ICommitRecord} choosen as the ground state for this *************** *** 191,196 **** final private ICommitRecord commitRecord; - private RunState runState; - /** * A store used to hold write sets for read-write transactions (it is null --- 138,141 ---- *************** *** 210,219 **** /** * Indices isolated by this transactions. */ ! private Map<String, IIndex> btrees = new HashMap<String, IIndex>(); /** ! * Create a transaction starting the last committed state of the journal as ! * of the specified startTime. * * @param journal --- 155,173 ---- /** * Indices isolated by this transactions. + * + * @todo Note that this mapping could use weak value map as long as we + * retained the address of the metadata record in a hard reference map + * so that we could reload the index from the {@link #tmpStore}. I am + * not sure that any performance benefit would be realized by closing + * out indices for active transactions. It is possible that this could + * benefit or hurt. 
Many of the same benefits could be realized by + * having the {@link BTree} class automatically release large + * transient data structures after a period of disuse. */ ! private Map<String, IIsolatedIndex> btrees = new HashMap<String, IIsolatedIndex>(); /** ! * Create a transaction reading from the most recent committed state not ! * later than the specified startTime. * * @param journal *************** *** 221,257 **** * * @param startTime ! * The startTime, which MUST be assigned consistently based on a ! * {@link ITimestampService}. Note that a transaction does not ! * start up on all {@link Journal}s at the same time. Instead, ! * the transaction start startTime is assigned by a centralized ! * time service and then provided each time a transaction object ! * must be created for isolated on some {@link Journal}. * * @param readOnly * When true the transaction will reject writes and * {@link #prepare()} and {@link #commit()} will be NOPs. - * - * @exception IllegalStateException - * if the transaction state has been garbage collected. */ ! public Tx(Journal journal, long timestamp, boolean readOnly) { ! ! if (journal == null) ! throw new IllegalArgumentException(); ! ! this.journal = journal; ! ! this.startTime = timestamp; ! ! this.readOnly = readOnly; ! this.tmpStore = readOnly ? null : new TemporaryRawStore( ! Bytes.megabyte * 1, // initial in-memory size. ! Bytes.megabyte * 10, // maximum in-memory size. ! false // do NOT use direct buffers. ! ); - journal.activateTx(this); - /* * The commit record serving as the ground state for the indices --- 175,193 ---- * * @param startTime ! * The start time assigned to the transaction. Note that a ! * transaction does not start execution on all {@link Journal}s ! * at the same moment. Instead, the transaction start time ! * is assigned by a centralized time service and then provided ! * each time a transaction object must be created for isolation ! * of resources accessible on some {@link Journal}.
* * @param readOnly * When true the transaction will reject writes and * {@link #prepare()} and {@link #commit()} will be NOPs. */ ! public Tx(Journal journal, long startTime, boolean readOnly) { ! super(journal, startTime, readOnly); /* * The commit record serving as the ground state for the indices *************** *** 259,343 **** * transaction will be unable to isolate any indices). */ ! this.commitRecord = journal.getCommitRecord(timestamp); ! ! this.runState = RunState.ACTIVE; ! ! } ! ! /** ! * The hash code is based on the {@link #getStartTimestamp()}. ! * ! * @todo pre-compute this value if it is used much. ! */ ! final public int hashCode() { ! ! return Long.valueOf(startTime).hashCode(); ! ! } ! ! /** ! * True iff they are the same object or have the same start timestamp. ! * ! * @param o ! * Another transaction object. ! */ ! final public boolean equals(ITx o) { ! ! return this == o || startTime == o.getStartTimestamp(); ! ! } ! ! /** ! * The transaction identifier. ! * ! * @return The transaction identifier. ! * ! * @todo verify that this has the semantics of the transaction start time ! * and that the startTime is (must be) assigned by the same service ! * that assigns the {@link #getCommitTimestamp()}. ! */ ! final public long getStartTimestamp() { ! return startTime; } - - /** - * Return the commit timestamp assigned to this transaction. - * - * @return The commit timestamp assigned to this transaction. - * - * @exception IllegalStateException - * unless the transaction writable and {@link #isPrepared()} - * or {@link #isCommitted()}. 
- */ - final public long getCommitTimestamp() { - - if(readOnly) { - - throw new IllegalStateException(); - } - - switch(runState) { - case ACTIVE: - case ABORTED: - throw new IllegalStateException(); - case PREPARED: - case COMMITTED: - if(commitTimestamp == 0L) throw new AssertionError(); - return commitTimestamp; - } - - throw new AssertionError(); - - } - - public String toString() { - - return ""+startTime; - - } - /** * This method must be invoked any time a transaction completes ({@link #abort()}s --- 195,208 ---- * transaction will be unable to isolate any indices). */ ! this.commitRecord = journal.getCommitRecord(startTime); ! this.tmpStore = readOnly ? null : new TemporaryRawStore( ! Bytes.megabyte * 1, // initial in-memory size. ! Bytes.megabyte * 10, // maximum in-memory size. ! false // do NOT use direct buffers. ! ); } /** * This method must be invoked any time a transaction completes ({@link #abort()}s *************** *** 347,350 **** --- 212,217 ---- protected void releaseResources() { + super.releaseResources(); + /* * Release hard references to any named btrees isolated within this *************** *** 365,638 **** } ! /** ! * Prepare the transaction for a {@link #commit()}. ! * ! * @exception IllegalStateException ! * if the transaction is not active. If the transaction is ! * not complete, then it will be aborted. ! */ ! public void prepare() { ! ! if( ! isActive() ) { ! ! if( ! isComplete() ) { ! ! abort(); ! ! } ! ! throw new IllegalStateException(NOT_ACTIVE); ! ! } ! ! if (!readOnly) { ! ! /* ! * The commit startTime is assigned when we prepare the transaction ! * since the the commit protocol does not permit unisolated writes ! * once a transaction begins to prepar until the transaction has ! * either committed or aborted (if such writes were allowed then we ! * would have to re-validate the transaction in order to enforce ! * serializability). ! * ! * @todo resolve this against a service in a manner that will ! 
* support a distributed database commit protocol. ! */ ! commitTimestamp = journal.nextTimestamp(); ! ! try { ! ! /* ! * Validate against the current state of the various indices ! * on write the transaction has written. ! */ ! ! if (!validateWriteSets()) { ! ! abort(); ! ! throw new ValidationError(); ! ! } ! ! } catch( ValidationError ex) { ! ! throw ex; ! ! } catch (Throwable t) { ! ! abort(); ! ! throw new RuntimeException("Unexpected error: " + t, t); ! ! } ! ! } ! ! journal.prepared(this); ! ! runState = RunState.PREPARED; ! ! } ! ! /** ! * <p> ! * Commit the transaction. ! * </p> ! * <p> ! * Note: You MUST {@link #prepare()} a transaction before you ! * {@link #commit()} that transaction. This requirement exists as a ! * placeholder for a 2-phase commit protocol for use with distributed ! * transactions. ! * </p> ! * ! * @return The commit timestamp or <code>0L</code> if the transaction was ! * read-only. ! * ! * @exception IllegalStateException ! * if the transaction has not been ! * {@link #prepare() prepared}. If the transaction is not ! * already complete, then it is aborted. ! */ ! public long commit() { ! ! if( ! isPrepared() ) { ! ! if( ! isComplete() ) { ! ! abort(); ! ! } ! ! throw new IllegalStateException(NOT_PREPARED); ! ! } ! ! try { ! ! if(!readOnly) { ! ! /* ! * Merge each isolated index into the global scope. This also ! * marks the slots used by the versions written by the ! * transaction as 'committed'. This operation MUST succeed (at a ! * logical level) since we have already validated (neither ! * read-write nor write-write conflicts exist). ! * ! * @todo Non-transactional operations on the global scope should ! * be either disallowed entirely or locked out during the ! * prepare-commit protocol when using transactions since they ! * (a) could invalidate the pre-condition for the merge; and (b) ! * uncommitted changes would be discarded if the merge operation ! * fails. One solution is to use batch operations or group ! 
* commit mechanism to dynamically create transactions from ! * unisolated operations. ! */ ! ! mergeOntoGlobalState(); ! ! // Atomic commit. ! journal.commit(this); ! ! } ! ! runState = RunState.COMMITTED; ! ! journal.completedTx(this); ! ! } catch( Throwable t) { ! ! /* ! * If the operation fails then we need to discard any changes that ! * have been merged down into the global state. Failure to do this ! * will result in those changes becoming restart-safe when the next ! * transaction commits! ! * ! * Discarding registered committers is legal if we are observing ! * serializability; that is, if no one writes on the global state ! * for a restart-safe btree except mergeOntoGlobalState(). When this ! * constraint is observed it is impossible for there to be ! * uncommitted changes when we begin to merge down onto the store ! * and any changes may simply be discarded. ! * ! * Note: we can not simply reload the current root block (or reset ! * the nextOffset to be assigned) since concurrent transactions may ! * be writing non-restart safe data on the store in their own ! * isolated btrees. ! */ ! ! journal.abort(); ! ! abort(); ! ! throw new RuntimeException( t ); ! ! } finally { ! ! releaseResources(); ! ! } ! ! return readOnly ? 0L : getCommitTimestamp(); ! ! } ! ! /** ! * Abort the transaction. ! * ! * @exception IllegalStateException ! * if the transaction is already complete. ! */ ! public void abort() { ! ! if (isComplete()) ! throw new IllegalStateException(IS_COMPLETE); ! ! try { ! ! runState = RunState.ABORTED; ! ! journal.completedTx(this); ! ! } finally { ! ! releaseResources(); ! ! } ! ! } ! ! final public boolean isReadOnly() { ! ! return readOnly; ! ! } ! ! /** ! * A transaction is "active" when it is created and remains active until it ! * prepares or aborts. An active transaction accepts READ, WRITE, DELETE, ! * PREPARE and ABORT requests. ! * ! * @return True iff the transaction is active. ! */ ! final public boolean isActive() { ! ! 
return runState == RunState.ACTIVE; ! ! } ! ! /** ! * A transaction is "prepared" once it has been successfully validated and ! * has fulfilled its pre-commit contract for a multi-stage commit protocol. ! * An prepared transaction accepts COMMIT and ABORT requests. ! * ! * @return True iff the transaction is prepared to commit. ! */ ! final public boolean isPrepared() { ! ! return runState == RunState.PREPARED; ! ! } ! ! /** ! * A transaction is "complete" once has either committed or aborted. A ! * completed transaction does not accept any requests. ! * ! * @return True iff the transaction is completed. ! */ ! final public boolean isComplete() { ! ! return runState == RunState.COMMITTED || runState == RunState.ABORTED; ! ! } ! ! /** ! * A transaction is "committed" iff it has successfully committed. A ! * committed transaction does not accept any requests. ! * ! * @return True iff the transaction is committed. ! */ ! final public boolean isCommitted() { ! ! return runState == RunState.COMMITTED; ! ! } ! ! /** ! * A transaction is "aborted" iff it has successfully aborted. An aborted ! * transaction does not accept any requests. ! * ! * @return True iff the transaction is aborted. ! */ ! final public boolean isAborted() { ! ! return runState == RunState.ABORTED; ! ! } ! ! /** ! * Validate all isolated btrees written on by this transaction. ! */ ! private boolean validateWriteSets() { assert ! readOnly; --- 232,236 ---- } ! protected boolean validateWriteSets() { assert ! readOnly; *************** *** 657,665 **** */ ! Iterator<Map.Entry<String,IIndex>> itr = btrees.entrySet().iterator(); while(itr.hasNext()) { ! Map.Entry<String, IIndex> entry = itr.next(); String name = entry.getKey(); --- 255,263 ---- */ ! Iterator<Map.Entry<String,IIsolatedIndex>> itr = btrees.entrySet().iterator(); while(itr.hasNext()) { ! Map.Entry<String, IIsolatedIndex> entry = itr.next(); String name = entry.getKey(); *************** *** 682,698 **** } ! /** ! 
* Merge down the write set from all isolated btrees written on by this ! * transactions into the corresponding btrees in the global state. ! */ ! private void mergeOntoGlobalState() { assert ! readOnly; ! Iterator<Map.Entry<String,IIndex>> itr = btrees.entrySet().iterator(); while(itr.hasNext()) { ! Map.Entry<String, IIndex> entry = itr.next(); String name = entry.getKey(); --- 280,294 ---- } ! protected void mergeOntoGlobalState() { assert ! readOnly; ! super.mergeOntoGlobalState(); ! ! Iterator<Map.Entry<String,IIsolatedIndex>> itr = btrees.entrySet().iterator(); while(itr.hasNext()) { ! Map.Entry<String, IIsolatedIndex> entry = itr.next(); String name = entry.getKey(); *************** *** 747,751 **** * on each call within the same transaction. */ ! IIndex index = btrees.get(name); if(commitRecord==null) { --- 343,347 ---- * on each call within the same transaction. */ ! IIsolatedIndex index = btrees.get(name); if(commitRecord==null) { *************** *** 777,781 **** if(readOnly) { ! index = new ReadOnlyIndex(src); } else { --- 373,377 ---- if(readOnly) { ! index = new ReadOnlyIsolatedIndex(src); } else { Index: Journal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Journal.java,v retrieving revision 1.56 retrieving revision 1.57 diff -C2 -d -r1.56 -r1.57 *** Journal.java 22 Feb 2007 16:59:34 -0000 1.56 --- Journal.java 28 Feb 2007 13:59:10 -0000 1.57 *************** *** 107,114 **** * FIXME Priority items are: * <ol> ! * <li> Transaction isolation.</li> * <li> Concurrent load for RDFS w/o rollback.</li> * <li> Scale out database (automatic re-partitioning of indices and processing * of deletion markers).</li> * <li> Distributed database protocols.</li> * <li> Segment server (mixture of journal server and read-optimized database --- 107,122 ---- * FIXME Priority items are: * <ol> ! * <li> Transaction isolation (correctness tests, isolatedbtree tests, fused ! 
* view tests).</li> * <li> Concurrent load for RDFS w/o rollback.</li> + * <li> Group commit for higher transaction throughput.<br> + * Note: TPS is basically constant for a given combination of the buffer mode + * and whether or not commits are forced to disk. This means that the #of + * clients is not a strong influence on performance. The big wins are Transient + * and Force := No since neither conditions synchs to disk. This suggests that + * the big win for TPS throughput is going to be group commit. </li> * <li> Scale out database (automatic re-partitioning of indices and processing * of deletion markers).</li> + * <li> AIO for the Direct and Disk modes.</li> * <li> Distributed database protocols.</li> * <li> Segment server (mixture of journal server and read-optimized database *************** *** 123,130 **** * <li> Architecture using queues from GOM to journal/database segment server * supporting both embedded and remote scenarios.</li> - * <li>Expand on latency and decision criteria for notifying clients when pages - * or objects of interest have been modified by another transaction that has - * committed (or in the case of distributed workers on a single transaction, its - * peers).</li> * </ol> * --- 131,134 ---- *************** *** 937,941 **** public long commit() { ! return commit(null); } --- 941,945 ---- public long commit() { ! return commitNow(nextTimestamp()); } *************** *** 953,963 **** * no data to commit. */ ! protected long commit(Tx tx) { assertOpen(); - final long commitTimestamp = (tx == null ? timestampFactory - .nextTimestamp() : tx.getCommitTimestamp()); - /* * First, run each of the committers accumulating the updated root --- 957,964 ---- * no data to commit. */ ! protected long commitNow(long commitTime) { assertOpen(); ... [truncated message content] |
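The refactoring in the diffs above collapses the commented-out `newTx(boolean)` and `newReadCommittedTx()` variants into a single `newTx(IsolationEnum)` entry point, with the enum moved out of `TransactionServer` into its own type. A minimal, self-contained sketch of that pattern (class and method names here are hypothetical stand-ins, not the actual bigdata API):

```java
/**
 * Sketch of the single-entry-point transaction API described in the diffs
 * above. TxManagerDemo and its members are illustrative stand-ins for the
 * bigdata classes, not the real API.
 */
public class TxManagerDemo {

    /** The three isolation levels moved out of TransactionServer. */
    public enum IsolationEnum {
        /** Fully isolated read-write transaction. */
        ReadWrite,
        /** Fully isolated read-only transaction. */
        ReadOnly,
        /** Read-only view that evolves as concurrent transactions commit. */
        ReadCommitted;

        /** Only ReadWrite accepts writes. */
        public boolean isReadOnly() { return this != ReadWrite; }
    }

    // Stand-in for the centralized timestamp service: start times double as
    // unique transaction identifiers, so they must be strictly increasing.
    private long lastTimestamp = 0L;

    /** One method replaces newTx(boolean) and newReadCommittedTx(). */
    public synchronized long newTx(IsolationEnum level) {
        return ++lastTimestamp;
    }

    public static void main(String[] args) {
        TxManagerDemo tm = new TxManagerDemo();
        long t1 = tm.newTx(IsolationEnum.ReadWrite);
        long t2 = tm.newTx(IsolationEnum.ReadCommitted);
        System.out.println("t1=" + t1 + " t2=" + t2
                + " readOnly=" + IsolationEnum.ReadCommitted.isReadOnly());
    }
}
```

Passing the level as an enum rather than a boolean leaves room for further levels without another overload, which is presumably why the boolean variants were retired.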
From: Bryan T. <tho...@us...> - 2007-02-28 13:59:14
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/isolation In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv19761/src/java/com/bigdata/isolation Added Files: ReadOnlyIsolatedIndex.java Log Message: Adds support for read-committed transactions. --- NEW FILE: ReadOnlyIsolatedIndex.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Feb 27, 2007 */ package com.bigdata.isolation; import com.bigdata.objndx.ReadOnlyIndex; /** * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public class ReadOnlyIsolatedIndex extends ReadOnlyIndex implements IIsolatedIndex { /** * @param src */ public ReadOnlyIsolatedIndex(IIsolatableIndex src) { super(src); } } |
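The new class is tiny because the work is done by the idiom it follows: a read-only view that delegates lookups to a source index, rejects writes, and adds only a marker interface (`IIsolatedIndex`). A generic sketch of that wrapper idiom, with `SimpleIndex` and both classes as hypothetical stand-ins for the bigdata types:

```java
import java.util.TreeMap;

/**
 * Sketch of the read-only wrapper idiom behind ReadOnlyIsolatedIndex: a view
 * that forwards reads to a source index and rejects writes. SimpleIndex and
 * the classes below are illustrative stand-ins, not the bigdata types.
 */
public class ReadOnlyViewDemo {

    /** Tiny stand-in for the read/write index contract. */
    public interface SimpleIndex {
        Object lookup(String key);
        Object insert(String key, Object value);
    }

    /** Mutable source index backed by an ordered map. */
    public static class MapIndex implements SimpleIndex {
        private final TreeMap<String, Object> map = new TreeMap<String, Object>();
        public Object lookup(String key) { return map.get(key); }
        public Object insert(String key, Object value) { return map.put(key, value); }
    }

    /**
     * The wrapper: reads delegate, writes throw. A subclass that only adds a
     * marker interface (as ReadOnlyIsolatedIndex does with IIsolatedIndex)
     * needs nothing beyond a constructor.
     */
    public static class ReadOnlyView implements SimpleIndex {
        private final SimpleIndex src;
        public ReadOnlyView(SimpleIndex src) { this.src = src; }
        public Object lookup(String key) { return src.lookup(key); }
        public Object insert(String key, Object value) {
            throw new UnsupportedOperationException("read-only view");
        }
    }

    public static void main(String[] args) {
        MapIndex src = new MapIndex();
        src.insert("a", "1");
        SimpleIndex view = new ReadOnlyView(src);
        System.out.println("lookup a -> " + view.lookup("a"));
        try {
            view.insert("b", "2");
        } catch (UnsupportedOperationException expected) {
            System.out.println("write rejected as expected");
        }
    }
}
```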
From: Bryan T. <tho...@us...> - 2007-02-28 13:59:13
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv19761/src/java/com/bigdata/scaleup Modified Files: PartitionedJournal.java Log Message: Adds support for read-committed transactions. Index: PartitionedJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/PartitionedJournal.java,v retrieving revision 1.10 retrieving revision 1.11 diff -C2 -d -r1.10 -r1.11 *** PartitionedJournal.java 22 Feb 2007 16:59:35 -0000 1.10 --- PartitionedJournal.java 28 Feb 2007 13:59:10 -0000 1.11 *************** *** 59,62 **** --- 59,63 ---- import com.bigdata.journal.IJournal; import com.bigdata.journal.IRootBlockView; + import com.bigdata.journal.IsolationEnum; import com.bigdata.journal.Journal; import com.bigdata.journal.Name2Addr.Entry; *************** *** 1214,1227 **** } - public long newReadCommittedTx() { - return slave.newReadCommittedTx(); - } - public long newTx() { return slave.newTx(); } ! public long newTx(boolean readOnly) { ! return slave.newTx(readOnly); } --- 1215,1224 ---- } public long newTx() { return slave.newTx(); } ! public long newTx(IsolationEnum level) { ! return slave.newTx(level); } |
From: Bryan T. <tho...@us...> - 2007-02-22 16:59:46
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/rawstore In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16247/src/test/com/bigdata/rawstore Modified Files: AbstractRawStoreTestCase.java Log Message: Lots of little changes to get the test suites to use temporary files and to remove them when tests complete. At present, this only helps if the test succeeds. You need to do a try {} finally {} to get the tests to remove their files on failure. Index: AbstractRawStoreTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/rawstore/AbstractRawStoreTestCase.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** AbstractRawStoreTestCase.java 21 Feb 2007 20:17:20 -0000 1.2 --- AbstractRawStoreTestCase.java 22 Feb 2007 16:59:34 -0000 1.3 *************** *** 162,165 **** --- 162,167 ---- } + + store.closeAndDelete(); } *************** *** 185,188 **** --- 187,192 ---- } + store.closeAndDelete(); + } *************** *** 205,209 **** } ! } --- 209,215 ---- } ! ! store.closeAndDelete(); ! } *************** *** 232,236 **** } ! } --- 238,244 ---- } ! ! store.closeAndDelete(); ! } *************** *** 257,261 **** } ! } --- 265,271 ---- } ! ! store.closeAndDelete(); ! } *************** *** 368,371 **** --- 378,383 ---- assertEquals(expected.length,actual.limit()); + store.closeAndDelete(); + } *************** *** 424,428 **** assertEquals(expected.length, actual2.limit()); } ! } --- 436,442 ---- assertEquals(expected.length, actual2.limit()); } ! ! store.closeAndDelete(); ! } *************** *** 804,807 **** --- 818,823 ---- assertEquals(expected2,store.read(addr1)); + store.closeAndDelete(); + } *************** *** 856,859 **** --- 872,877 ---- } + store.closeAndDelete(); + } *************** *** 915,919 **** } ! } --- 933,939 ---- } ! ! store.closeAndDelete(); ! 
} *************** *** 968,971 **** --- 988,995 ---- // } + /** + * Note: This will leave a test file around each time since we can + * not really call closeAndDelete() when we are testing close(). + */ public void test_close() { *************** *** 988,992 **** System.err.println("Ignoring expected exception: "+ex); } ! } --- 1012,1016 ---- System.err.println("Ignoring expected exception: "+ex); } ! } |
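The log message notes that `closeAndDelete()` only helps when the test succeeds, and that a try/finally is needed to remove files on failure as well. A sketch of that cleanup idiom against a plain temporary file (`TempStore` is a hypothetical stand-in modeled on the diffs, not the bigdata store):

```java
import java.io.File;
import java.io.IOException;

/**
 * Sketch of the try/finally cleanup idiom the log message calls for: the
 * backing file is removed whether the test body succeeds or throws.
 * TempStore is a hypothetical stand-in modeled on the diffs above.
 */
public class CleanupDemo {

    public static class TempStore {
        private final File file;

        public TempStore() {
            try {
                // Same call the diff uses to obtain a unique backing file.
                file = File.createTempFile("bigdata", ".tmpStore");
            } catch (IOException ex) {
                throw new RuntimeException(ex);
            }
        }

        public File getFile() { return file; }

        /** Mirrors the closeAndDelete() method added in the diffs. */
        public void closeAndDelete() {
            if (!file.delete()) {
                System.err.println("WARN: Could not delete: "
                        + file.getAbsolutePath());
            }
        }
    }

    public static void main(String[] args) {
        TempStore store = new TempStore();
        try {
            // ... exercise the store; an assertion failure thrown here
            // would otherwise leave the file behind ...
        } finally {
            store.closeAndDelete(); // runs on success and on failure
        }
        System.out.println("file gone: " + !store.getFile().exists());
    }
}
```

Wrapping each test body this way would lift the caveat in the log message that cleanup currently only happens when the test succeeds.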
From: Bryan T. <tho...@us...> - 2007-02-22 16:59:42
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/scaleup In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16247/src/test/com/bigdata/scaleup Modified Files: TestMetadataIndex.java TestPartitionedJournal.java Log Message: Lots of little changes to get the test suites to use temporary files and to remove them when tests complete. At present, this only helps if the test succeeds. You need to do a try {} finally {} to get the tests to remove their files on failure. Index: TestMetadataIndex.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/scaleup/TestMetadataIndex.java,v retrieving revision 1.6 retrieving revision 1.7 diff -C2 -d -r1.6 -r1.7 *** TestMetadataIndex.java 17 Feb 2007 21:34:21 -0000 1.6 --- TestMetadataIndex.java 22 Feb 2007 16:59:35 -0000 1.7 *************** *** 74,78 **** import com.bigdata.rawstore.IRawStore; import com.bigdata.rawstore.SimpleMemoryRawStore; - import com.bigdata.scaleup.MetadataIndex.MetadataIndexMetadata; /** --- 74,77 ---- *************** *** 277,280 **** --- 276,281 ---- assertEquals(part1,md.get(key1)); assertEquals(part2,md.get(key2)); + + store.closeAndDelete(); } *************** *** 672,689 **** final Properties properties = new Properties(); - final File journalFile = new File(getName()+".jnl"); - - if(journalFile.exists() && !journalFile.delete() ) { - - fail("Could not delete file: "+journalFile.getAbsoluteFile()); - - } - properties.setProperty(Options.BUFFER_MODE, BufferMode.Disk.toString()); ! properties.setProperty(Options.FILE,journalFile.toString()); properties.setProperty(Options.DELETE_ON_CLOSE,"true"); properties.setProperty(Options.SEGMENT,"0"); --- 673,684 ---- final Properties properties = new Properties(); properties.setProperty(Options.BUFFER_MODE, BufferMode.Disk.toString()); ! 
properties.setProperty(Options.CREATE_TEMP_FILE,"true"); properties.setProperty(Options.DELETE_ON_CLOSE,"true"); + properties.setProperty(Options.DELETE_ON_EXIT,"true"); + properties.setProperty(Options.SEGMENT,"0"); *************** *** 963,969 **** System.err.println("End of stress test: ntrial="+ntrials+", nops="+nops); - // delete the journal file afterwards. - if(journalFile.exists()) journalFile.delete(); - } --- 958,961 ---- Index: TestPartitionedJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/scaleup/TestPartitionedJournal.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** TestPartitionedJournal.java 13 Feb 2007 23:01:12 -0000 1.3 --- TestPartitionedJournal.java 22 Feb 2007 16:59:35 -0000 1.4 *************** *** 66,69 **** --- 66,72 ---- /** + * @todo update how we create and get rid of the temporary files created by the + * tests. + * * @todo rather than writing this test suite directly, we mostly want to apply * the existing proxy test suites for {@link Journal}. *************** *** 105,109 **** protected void deleteTestFiles() { ! NameAndExtensionFilter filter = new NameAndExtensionFilter(getName(),PartitionedJournal.JNL); File[] files = filter.getFiles(); --- 108,112 ---- protected void deleteTestFiles() { ! NameAndExtensionFilter filter = new NameAndExtensionFilter(getName(),Options.JNL); File[] files = filter.getFiles(); |
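The switch from a hand-built journal file name to the `CREATE_TEMP_FILE` / `DELETE_ON_EXIT` options maps directly onto standard `java.io.File` calls. A sketch of what those two options imply (the property names mirror the diff, but the parsing code here is illustrative, not the bigdata `Options` handling):

```java
import java.io.File;
import java.io.IOException;
import java.util.Properties;

/**
 * Sketch of what the CREATE_TEMP_FILE and DELETE_ON_EXIT test options imply
 * in terms of java.io.File. The property names mirror the diff; the parsing
 * logic here is illustrative, not the bigdata code.
 */
public class TempFileOptionsDemo {

    public static File openFile(Properties properties) throws IOException {
        final boolean createTempFile =
                Boolean.parseBoolean(properties.getProperty("createTempFile", "false"));
        final boolean deleteOnExit =
                Boolean.parseBoolean(properties.getProperty("deleteOnExit", "false"));

        final File file;
        if (createTempFile) {
            // A unique, empty file in the default temporary directory.
            file = File.createTempFile("bigdata", ".jnl");
        } else {
            file = new File(properties.getProperty("file"));
        }
        if (deleteOnExit) {
            // The JVM removes the file on normal shutdown, so even a test
            // that fails to clean up does not accumulate journal files.
            file.deleteOnExit();
        }
        return file;
    }

    public static void main(String[] args) throws IOException {
        Properties p = new Properties();
        p.setProperty("createTempFile", "true");
        p.setProperty("deleteOnExit", "true");
        File f = openFile(p);
        System.out.println("created: " + f.exists());
        f.delete(); // tidy up immediately rather than waiting for exit
    }
}
```

Note that `deleteOnExit()` only fires on a normal JVM shutdown, which is why the log message still recommends explicit try/finally cleanup for the failure path.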
From: Bryan T. <tho...@us...> - 2007-02-22 16:59:40
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/journal In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16247/src/java/com/bigdata/journal Modified Files: TransientBufferStrategy.java Tx.java MappedBufferStrategy.java ITransactionManager.java BlockWriteCache.java IJournal.java DiskBackedBufferStrategy.java TemporaryRawStore.java Options.java DiskOnlyStrategy.java FileMetadata.java Journal.java Log Message: Lots of little changes to get the test suites to use temporary files and to remove them when tests complete. At present, this only helps if the test succeeds. You need to do a try {} finally {} to get the tests to remove their files on failure. Index: IJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/IJournal.java,v retrieving revision 1.6 retrieving revision 1.7 diff -C2 -d -r1.6 -r1.7 *** IJournal.java 21 Feb 2007 20:17:21 -0000 1.6 --- IJournal.java 22 Feb 2007 16:59:34 -0000 1.7 *************** *** 50,54 **** import java.util.Properties; - import com.bigdata.objndx.BTree; import com.bigdata.objndx.IIndex; import com.bigdata.rawstore.IRawStore; --- 50,53 ---- *************** *** 95,97 **** --- 94,127 ---- public IIndex getIndex(String name); + /** + * Return the named index isolated by the transaction having the + * specified start time. + * + * @param startTime The transaction start time. + */ + public IIndex getIndex(String name, long startTime); + + /** + * Return a read-only view of the current root block. + * + * @return The current root block. + */ + public IRootBlockView getRootBlockView(); + + /** + * Return the {@link ICommitRecord} for the most recent committed state + * whose commit timestamp is less than or equal to <i>timestamp</i>. This + * is used by a {@link Tx transaction} to locate the committed state that is + * the basis for its operations. + * + * @param timestamp + * Typically, the timestamp assigned to a transaction. 
+ * + * @return The {@link ICommitRecord} for the most recent committed state + * whose commit timestamp is less than or equal to <i>timestamp</i> + * -or- <code>null</code> iff there are no {@link ICommitRecord}s + * that satisify the probe. + */ + public ICommitRecord getCommitRecord(long commitTime); + } Index: DiskOnlyStrategy.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/DiskOnlyStrategy.java,v retrieving revision 1.18 retrieving revision 1.19 diff -C2 -d -r1.18 -r1.19 *** DiskOnlyStrategy.java 21 Feb 2007 20:17:21 -0000 1.18 --- DiskOnlyStrategy.java 22 Feb 2007 16:59:34 -0000 1.19 *************** *** 318,321 **** --- 318,333 ---- } + public void closeAndDelete() { + + close(); + + if(!file.delete()) { + + System.err.println("WARN: Could not delete: "+file.getAbsolutePath()); + + } + + } + public void deleteFile() { Index: TemporaryRawStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/TemporaryRawStore.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** TemporaryRawStore.java 21 Feb 2007 20:17:21 -0000 1.1 --- TemporaryRawStore.java 22 Feb 2007 16:59:34 -0000 1.2 *************** *** 153,156 **** --- 153,164 ---- } + public File getFile() { + + if(!open) throw new IllegalStateException(); + + return buf.getFile(); + + } + /** * Close the store and delete the associated file, if any. *************** *** 162,168 **** open = false; ! buf.close(); ! buf.deleteFile(); buf = null; --- 170,178 ---- open = false; ! // buf.close(); ! // ! // buf.deleteFile(); ! buf.closeAndDelete(); buf = null; *************** *** 170,173 **** --- 180,190 ---- } + public void closeAndDelete() { + + // Close already deletes the backing file. + close(); + + } + public void force(boolean metadata) { *************** *** 247,251 **** int segmentId = 0; ! 
File file = null; // request a unique filename. /* --- 264,278 ---- int segmentId = 0; ! File file; ! ! try { ! ! file = File.createTempFile("bigdata", ".tmpStore"); ! ! } catch (IOException ex) { ! ! throw new RuntimeException(ex); ! ! } /* *************** *** 253,259 **** * twice the data in the in-memory buffer. */ ! long initialExtent = FileMetadata.headerSize0 + tmp.getUserExtent() * 2; ! final boolean create = true; final boolean readOnly = false; --- 280,292 ---- * twice the data in the in-memory buffer. */ ! final long initialExtent = FileMetadata.headerSize0 + tmp.getUserExtent() * 2; ! final long maximumDiskExtent = Bytes.gigabyte32 * 2; ! ! final boolean create = false; ! ! final boolean isEmptyFile = true; ! ! final boolean deleteOnExit = true; final boolean readOnly = false; *************** *** 267,274 **** FileMetadata fileMetadata = new FileMetadata(segmentId, file, BufferMode.Disk, useDirectBuffers, initialExtent, ! create, readOnly, forceWrites); ! ! // Mark the file for deletion on exit. ! fileMetadata.file.deleteOnExit(); // Open the disk-based store file. --- 300,305 ---- FileMetadata fileMetadata = new FileMetadata(segmentId, file, BufferMode.Disk, useDirectBuffers, initialExtent, ! maximumDiskExtent, create, isEmptyFile, deleteOnExit, ! readOnly, forceWrites); // Open the disk-based store file. Index: FileMetadata.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/FileMetadata.java,v retrieving revision 1.13 retrieving revision 1.14 diff -C2 -d -r1.13 -r1.14 *** FileMetadata.java 21 Feb 2007 20:17:21 -0000 1.13 --- FileMetadata.java 22 Feb 2007 16:59:34 -0000 1.14 *************** *** 159,165 **** * The unique segment identifier. * @param file ! * The name of the file to be opened - when null, a file with a ! * unique name will be created in the default temporary ! * directory. * @param bufferMode * The {@link BufferMode}. 
--- 159,163 ---- * The unique segment identifier. * @param file ! * The name of the file to be opened. * @param bufferMode * The {@link BufferMode}. *************** *** 170,176 **** * {@link BufferMode#Disk} and {@link BufferMode#Mapped} modes. * @param initialExtent ! * The initial extent of the file iff a new file is created. * @param create * When true, the file is created if it does not exist. * @param readOnly * When true, the file is opened in a read-only mode and it is an --- 168,188 ---- * {@link BufferMode#Disk} and {@link BufferMode#Mapped} modes. * @param initialExtent ! * The initial extent of the journal. The size of the journal is ! * automatically increased up to the <i>maximumExtent</i> on an ! * as necessary basis. ! * @param maximumExtent ! * The maximum extent of the journal before it will ! * {@link Journal#overflow()}. * @param create * When true, the file is created if it does not exist. + * @param isEmptyFile + * This flag must be set when the temporary file mechanism is + * used to create a new temporary file otherwise an empty file is + * treated as an error since it does not contain valid root + * blocks. + * @param deleteOnExit + * When set, a <em>new</em> file will be marked for deletion + * when the VM exits. This may be used as part of a temporary + * store strategy. * @param readOnly * When true, the file is opened in a read-only mode and it is an *************** *** 186,194 **** */ FileMetadata(int segmentId, File file, BufferMode bufferMode, ! boolean useDirectBuffers, long initialExtent, boolean create, boolean readOnly, ForceEnum forceWrites) throws RuntimeException { ! // if (file == null) ! // throw new IllegalArgumentException(); if (bufferMode == null) --- 198,207 ---- */ FileMetadata(int segmentId, File file, BufferMode bufferMode, ! boolean useDirectBuffers, long initialExtent, long maximumExtent, ! boolean create, boolean isEmptyFile, boolean deleteOnExit, boolean readOnly, ForceEnum forceWrites) throws RuntimeException { ! 
if (file == null) ! throw new IllegalArgumentException(); if (bufferMode == null) *************** *** 226,231 **** this.readOnly = readOnly; ! this.exists = file != null && file.exists(); ! if (exists) { --- 239,246 ---- this.readOnly = readOnly; ! this.exists = !isEmptyFile && file.exists(); ! ! this.file = file; ! if (exists) { *************** *** 243,247 **** } ! if ( ! create ) { throw new RuntimeException("File does not exist and '" --- 258,262 ---- } ! if ( ! create && ! isEmptyFile ) { throw new RuntimeException("File does not exist and '" *************** *** 251,274 **** } - if (file == null) { - - try { - - file = File.createTempFile("bigdata", ".store"); - - } catch (IOException ex) { - - throw new RuntimeException(ex); - - } - - } - System.err.println("Will create file: " + file.getAbsoluteFile()); } - this.file = file; - try { --- 266,273 ---- *************** *** 460,468 **** */ /* * Set the initial extent. */ ! this.extent = initialExtent; this.userExtent = extent - headerSize0; --- 459,474 ---- */ + // Mark the file for deletion on exit. + if(deleteOnExit) file.deleteOnExit(); + /* * Set the initial extent. + * + * Note: since a mapped file CAN NOT be extended, we pre-extend + * it to its maximum extent here. */ ! this.extent = (bufferMode == BufferMode.Mapped ? maximumExtent ! : initialExtent); this.userExtent = extent - headerSize0; Index: ITransactionManager.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/ITransactionManager.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** ITransactionManager.java 20 Feb 2007 00:27:03 -0000 1.1 --- ITransactionManager.java 22 Feb 2007 16:59:34 -0000 1.2 *************** *** 56,60 **** * @version $Id$ */ ! public interface ITransactionManager { /** --- 56,60 ---- * @version $Id$ */ ! 
public interface ITransactionManager extends ITimestampService { /** Index: Options.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Options.java,v retrieving revision 1.9 retrieving revision 1.10 diff -C2 -d -r1.9 -r1.10 *** Options.java 21 Feb 2007 20:17:21 -0000 1.9 --- Options.java 22 Feb 2007 16:59:34 -0000 1.10 *************** *** 44,47 **** --- 44,48 ---- package com.bigdata.journal; + import java.io.File; import java.nio.channels.FileChannel; import java.util.Properties; *************** *** 67,72 **** /** ! * <code>bufferMode</code> - One of "transient", "direct", "mapped", ! * or "disk". See {@link BufferMode} for more information about each * mode. * --- 68,73 ---- /** ! * <code>bufferMode</code> - One of "Transient", "Direct", "Mapped", ! * or "Disk". See {@link BufferMode} for more information about each * mode. * *************** *** 201,215 **** */ public static final String DOUBLE_SYNC = "doubleSync"; /** ! * <code>deleteOnClose</code> - This optional boolean option causes ! * the journal file to be deleted when the journal is closed (default ! * <em>false</em>). This option is used by the test suites to keep ! * down the disk burden of the tests and MUST NOT be used with live ! * data. */ public final static String DELETE_ON_CLOSE = "deleteOnClose"; /** * The default for {@link #USE_DIRECT_BUFFERS}. */ --- 202,246 ---- */ public static final String DOUBLE_SYNC = "doubleSync"; + + /** + * <code>createTempFile</code> - This boolean option causes a new file to + * be created using the + * {@link File#createTempFile(String, String, File) temporary file mechanism}. + * If {@link #DELETE_ON_EXIT} is also specified, then the temporary file + * will be {@link File#deleteOnExit() marked for deletion} when the JVM + * exits. This option is often used when preparing a journal for a unit + * test. 
The default temporary directory is used unless it is overriden by + * the {@link #TMP_DIR} option. + */ + public final static String CREATE_TEMP_FILE = "createTempFile"; /** ! * <code>deleteOnClose</code> - This boolean option causes the journal ! * file to be deleted when the journal is closed (default <em>false</em>). ! * This option is used by the some test suites (those that do not test ! * restart safety) to keep down the disk burden of the tests and MUST NOT be ! * used with restart-safe data. */ public final static String DELETE_ON_CLOSE = "deleteOnClose"; + + /** + * <code>deleteOnExit</code> - This boolean option causes the journal file + * to be deleted when the VM exits (default <em>false</em>). This option + * is used by the test suites to keep down the disk burden of the tests + * and MUST NOT be used with restart-safe data. + */ + public final static String DELETE_ON_EXIT = "deleteOnExit"; /** + * <code>tmp.dir</code> - The property whose value is the name of the + * directory in which temporary files will be created. When not specified + * the default is governed by the value of the System property named + * <code>java.io.tmpdir</code>. There are several kinds of temporary + * files that can be created, including temporary journals, intermediate + * files from an index merge process, etc. + */ + public static final String TMP_DIR = "tmp.dir"; + + /** * The default for {@link #USE_DIRECT_BUFFERS}. */ *************** *** 259,261 **** --- 290,307 ---- public final static boolean DEFAULT_DELETE_ON_CLOSE = false; + /** + * The default for the {@link #DELETE_ON_EXIT} option. + */ + public final static boolean DEFAULT_DELETE_ON_EXIT = false; + + /** + * The default for the {@link #CREATE_TEMP_FILE} option. + */ + public final static boolean DEFAULT_CREATE_TEMP_FILE = false; + + /** + * The recommened extension for journal files. 
+ */ + public static final String JNL = ".jnl"; + } Index: Journal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Journal.java,v retrieving revision 1.55 retrieving revision 1.56 diff -C2 -d -r1.55 -r1.56 *** Journal.java 21 Feb 2007 20:17:21 -0000 1.55 --- Journal.java 22 Feb 2007 16:59:34 -0000 1.56 *************** *** 49,52 **** --- 49,53 ---- import java.io.File; + import java.io.IOException; import java.nio.ByteBuffer; import java.util.Map; *************** *** 64,140 **** import com.bigdata.objndx.BTreeMetadata; import com.bigdata.objndx.IIndex; import com.bigdata.rawstore.Addr; import com.bigdata.rawstore.Bytes; import com.bigdata.util.concurrent.DaemonThreadFactory; /** * <p> ! * An append-only persistence capable data structure supporting atomic commit, ! * scalable named indices, and transactions. Writes are logically appended to ! * the journal to minimize disk head movement. ! * </p> ! * ! * <p> ! * The journal provides for fast migration of committed data to a read-optimized ! * database. Data may be migrated as soon as a transaction commits and only the ! * most recent state for any datum need be migrated. Note that the criteria for ! * migration typically are met before the slots occupied by those objects may be ! * released. This is because a concurrent transaction must be permitted to read ! * from the committed state of the database at the time that the transaction was ! * begun. ! * </p> ! * <p> ! * The journal is a ring buffer divised of slots. The slot size and initial ! * extent are specified when the journal is provisioned. Objects are written ! * onto free slots. Slots are released once no active transaction can read from ! * that slots. In this way, the journal buffers consistent histories. ! * </p> ! * <p> ! * The journal maintains two indices: ! * <ol> ! * <li>An object index from int32 object identifier to a slot allocation; and ! * </li> ! 
* <li>An allocation index from slot to a slot status record.</li> ! * </ol> ! * These data structures are b+trees. The nodes of the btrees are stored in the ! * journal. These data structures are fully isolated. The roots of the indices ! * are choosen based on the transaction begin time. Changes result in a copy on ! * write. Writes percolate up the index nodes to the root node. ! * </p> ! * <p> ! * Commit processing. The journal also maintains two root blocks. Commit updates ! * the root blocks using the Challis algorithm. (The root blocks are updated ! * using an alternating pattern and timestamps are recorded at the head and tail ! * of each root block to detect partial writes.) When the journal is backed by a ! * disk file, the file is flushed to disk on commit. ! * </p> ! * <p> ! * A journal may be used without a database file as an object database. The ! * design is very efficient for absorbing writes, but read operations are not ! * optimized. Overall performance will degrade as the journal size increases ! * beyond the limits of physical memory and as the object index depth increases. * </p> * <p> ! * A journal and a database file form a logical segment in the bigdata ! * distributed database architecture. In bigdata, the segment size is generally ! * limited to a few hundred megabytes. At this scale, the journal and database ! * may both be wired into memory for the fastest performance. If memory ! * constraits are tighted, these files may be memory-mapped or used as a ! * disk-base data structures. A single journal may buffer for multiple copies of ! * the same database segment or journals may be chained for redundancy. * </p> * <p> ! * Very large objects should be handled by specially provisioned database files ! * using large pages and a "never overwrite" strategy. * </p> * <p> ! * Note: This class is NOT thread-safe. Instances of this class MUST use a ! * single-threaded context. That context is the single-threaded journal server ! * API. 
The journal server may be either embedded (in which case objects are ! * migrated to the server using FIFO queues) or networked (in which case the ! * journal server exposes a non-blocking service with a single thread for reads, ! * writes and deletes on the journal). (However, note transaction processing MAY ! * be concurrent since the write set of a transaction is written on a ! * {@link TemporaryRawStore}.) * </p> * --- 65,103 ---- import com.bigdata.objndx.BTreeMetadata; import com.bigdata.objndx.IIndex; + import com.bigdata.objndx.IndexSegment; import com.bigdata.rawstore.Addr; import com.bigdata.rawstore.Bytes; + import com.bigdata.scaleup.PartitionedJournal.Options; import com.bigdata.util.concurrent.DaemonThreadFactory; /** * <p> ! * The {@link Journal} is an append-only persistence capable data structure ! * supporting atomic commit, named indices, and transactions. Writes are ! * logically appended to the journal to minimize disk head movement. * </p> * <p> ! * Commit processing. The journal maintains two root blocks. Commit updates the ! * root blocks using the Challis algorithm. (The root blocks are updated using ! * an alternating pattern and timestamps are recorded at the head and tail of ! * each root block to detect partial writes.) When the journal is backed by a ! * disk file, the data ! * {@link Options#FORCE_ON_COMMIT optionally flushed to disk on commit}. If ! * desired, the writes may be flushed before the root blocks are updated to ! * ensure that the writes are not reordered - see {@link Options#DOUBLE_SYNC}. * </p> * <p> ! * Note: This class does NOT provide a thread-safe implementation of the ! * {@link IRawStore} API. Writes on the store MUST be serialized. Transaction ! * that write on the store are automatically serialized by {@link #commit(long)}. * </p> * <p> ! * Note: transaction processing MAY occur be concurrent since the write set of a ! * each transaction is written on a distinct {@link TemporaryRawStore}. ! 
* However, each transaction is NOT thread-safe and MUST NOT be executed by more ! * than one concurrent thread. Typically, a thread pool of some fixed size is ! * created and transctions are assigned to threads when they start on the ! * journal. If and as necessary, the {@link TemporaryRawStore} will spill from ! * memory onto disk allowing scalable transactions (up to 2G per transactions). * </p> * *************** *** 145,149 **** * <ol> * <li> Transaction isolation.</li> ! * <li> Scale out database (automatic re-partitioning of indices).</li> * <li> Distributed database protocols.</li> * <li> Segment server (mixture of journal server and read-optimized database --- 108,114 ---- * <ol> * <li> Transaction isolation.</li> ! * <li> Concurrent load for RDFS w/o rollback.</li> ! * <li> Scale out database (automatic re-partitioning of indices and processing ! * of deletion markers).</li> * <li> Distributed database protocols.</li> * <li> Segment server (mixture of journal server and read-optimized database *************** *** 186,193 **** * detected. * - * FIXME Write tests for writable and read-only fully isolated transactions, - * including the non-duplication of per-tx data structures (e.g., you always get - * the same object back when you ask for an isolated named index). - * * @todo I need to revisit the assumptions for very large objects in the face of * the recent / planned redesign. I expect that using an index with a key --- 151,154 ---- *************** *** 195,198 **** --- 156,167 ---- * limited to 32k or so. Writes on such indices should probably be * directed to a journal using a disk-only mode. + * + * @todo Checksums and/or record compression are currently handled on a per-{@link BTree} + * or other persistence capable data structure basis. It is nice to be + * able to choose for which indices and when ( {@link Journal} vs + * {@link IndexSegment}) to apply these algorithms. 
However, it might be + * nice to factor their application out a bit into a layered api - as long + * as the right layering is correctly re-established on load of the + * persistence data structure. */ public class Journal implements IJournal { *************** *** 228,243 **** /** ! * The implementation logic for the current {@link BufferMode}. */ ! final IBufferStrategy _bufferStrategy; ! /** ! * The service used to generate commit timestamps. ! * ! * @todo parameterize using {@link Options} so that we can resolve a ! * low-latency service for use with a distributed database commit ! * protocol. */ ! protected final ITimestampService timestampFactory = LocalTimestampService.INSTANCE; /** --- 197,208 ---- /** ! * The directory that should be used for temporary files. */ ! final public File tmpDir; ! /** ! * The implementation logic for the current {@link BufferMode}. */ ! final IBufferStrategy _bufferStrategy; /** *************** *** 324,329 **** boolean useDirectBuffers = Options.DEFAULT_USE_DIRECT_BUFFERS; boolean create = Options.DEFAULT_CREATE; ! boolean readOnly = Options.DEFAULT_READ_ONLY; boolean deleteOnClose = Options.DEFAULT_DELETE_ON_CLOSE; ForceEnum forceWrites = Options.DEFAULT_FORCE_WRITES; ForceEnum forceOnCommit = Options.DEFAULT_FORCE_ON_COMMIT; --- 289,297 ---- boolean useDirectBuffers = Options.DEFAULT_USE_DIRECT_BUFFERS; boolean create = Options.DEFAULT_CREATE; ! boolean createTempFile = Options.DEFAULT_CREATE_TEMP_FILE; ! 
boolean isEmptyFile = false; boolean deleteOnClose = Options.DEFAULT_DELETE_ON_CLOSE; + boolean deleteOnExit = Options.DEFAULT_DELETE_ON_EXIT; + boolean readOnly = Options.DEFAULT_READ_ONLY; ForceEnum forceWrites = Options.DEFAULT_FORCE_WRITES; ForceEnum forceOnCommit = Options.DEFAULT_FORCE_ON_COMMIT; *************** *** 336,339 **** --- 304,308 ---- this.properties = properties = (Properties) properties.clone(); + // this.properties = properties; /* *************** *** 426,429 **** --- 395,442 ---- /* + * "createTempFile" + */ + + val = properties.getProperty(Options.CREATE_TEMP_FILE); + + if (val != null) { + + createTempFile = Boolean.parseBoolean(val); + + if(createTempFile) { + + create = false; + + isEmptyFile = true; + + } + + } + + // "tmp.dir" + val = properties.getProperty(Options.TMP_DIR); + + tmpDir = val == null ? new File(System.getProperty("java.io.tmpdir")) + : new File(val); + + if (!tmpDir.exists()) { + + if (!tmpDir.mkdirs()) { + + throw new RuntimeException("Could not create directory: " + + tmpDir.getAbsolutePath()); + + } + + } + + if(!tmpDir.isDirectory()) { + + throw new RuntimeException("Not a directory: " + + tmpDir.getAbsolutePath()); + + } + + /* * "readOnly" */ *************** *** 492,495 **** --- 505,573 ---- /* + * "deleteOnExit" + */ + + val = properties.getProperty(Options.DELETE_ON_EXIT); + + if (val != null) { + + deleteOnExit = Boolean.parseBoolean(val); + + } + + /* + * "file" + */ + + File file; + + if(bufferMode==BufferMode.Transient) { + + file = null; + + } else { + + val = properties.getProperty(Options.FILE); + + if(createTempFile && val != null) { + + throw new RuntimeException("Can not use option '" + + Options.CREATE_TEMP_FILE + "' with option '" + + Options.FILE + "'"); + + } + + if( createTempFile ) { + + try { + + val = File.createTempFile("bigdata-" + bufferMode + "-", + ".jnl", tmpDir).toString(); + + // // the file that gets opened. 
+ // properties.setProperty(Options.FILE, val); + // // turn off this property to facilitate re-open of the same file. + // properties.setProperty(Options.CREATE_TEMP_FILE,"false"); + + } catch(IOException ex) { + + throw new RuntimeException(ex); + + } + + } + + if (val == null) { + + throw new RuntimeException("Required property: '" + + Options.FILE + "'"); + + } + + file = new File(val); + + } + + /* * Create the appropriate IBufferStrategy object. */ *************** *** 541,564 **** /* - * "file" - */ - - val = properties.getProperty(Options.FILE); - - if (val == null) { - - throw new RuntimeException("Required property: '" - + Options.FILE + "'"); - - } - - File file = new File(val); - - /* * Setup the buffer strategy. */ FileMetadata fileMetadata = new FileMetadata(segmentId, file, ! BufferMode.Direct, useDirectBuffers, initialExtent, create, readOnly, forceWrites); --- 619,628 ---- /* * Setup the buffer strategy. */ FileMetadata fileMetadata = new FileMetadata(segmentId, file, ! BufferMode.Direct, useDirectBuffers, initialExtent, ! maximumExtent, create, isEmptyFile, deleteOnExit, readOnly, forceWrites); *************** *** 575,598 **** /* - * "file" - */ - - val = properties.getProperty(Options.FILE); - - if (val == null) { - - throw new RuntimeException("Required property: '" - + Options.FILE + "'"); - - } - - File file = new File(val); - - /* * Setup the buffer strategy. */ FileMetadata fileMetadata = new FileMetadata(segmentId, file, ! BufferMode.Mapped, useDirectBuffers, initialExtent, create, readOnly, forceWrites); --- 639,648 ---- /* * Setup the buffer strategy. */ FileMetadata fileMetadata = new FileMetadata(segmentId, file, ! BufferMode.Mapped, useDirectBuffers, initialExtent, ! 
maximumExtent, create, isEmptyFile, deleteOnExit, readOnly, forceWrites); *************** *** 609,632 **** /* - * "file" - */ - - val = properties.getProperty(Options.FILE); - - if (val == null) { - - throw new RuntimeException("Required property: '" - + Options.FILE + "'"); - - } - - File file = new File(val); - - /* * Setup the buffer strategy. */ FileMetadata fileMetadata = new FileMetadata(segmentId, file, ! BufferMode.Disk, useDirectBuffers, initialExtent, create, readOnly, forceWrites); --- 659,668 ---- /* * Setup the buffer strategy. */ FileMetadata fileMetadata = new FileMetadata(segmentId, file, ! BufferMode.Disk, useDirectBuffers, initialExtent, ! maximumExtent, create, isEmptyFile, deleteOnExit, readOnly, forceWrites); *************** *** 706,712 **** } ! /** ! * Close immediately. ! */ public void close() { --- 742,751 ---- } ! public File getFile() { ! ! return _bufferStrategy.getFile(); ! ! } ! public void close() { *************** *** 731,734 **** --- 770,785 ---- } + public void closeAndDelete() { + + close(); + + if (!deleteOnClose) { + + _bufferStrategy.deleteFile(); + + } + + } + private void assertOpen() { *************** *** 1282,1298 **** /** - * Return the {@link ICommitRecord} for the most recent committed state - * whose commit timestamp is less than or equal to <i>timestamp</i>. This - * is used by a {@link Tx transaction} to locate the committed state that is - * the basis for its operations. - * - * @param timestamp - * Typically, the timestamp assigned to a transaction. - * - * @return The {@link ICommitRecord} for the most recent committed state - * whose commit timestamp is less than or equal to <i>timestamp</i> - * -or- <code>null</code> iff there are no {@link ICommitRecord}s - * that satisify the probe. - * * @todo the {@link CommitRecordIndex} is a possible source of thread * contention since transactions need to use this code path in order --- 1333,1336 ---- *************** *** 1301,1307 **** * handling this. */ ! 
public ICommitRecord getCommitRecord(long timestamp) { ! return _commitRecordIndex.find(timestamp); } --- 1339,1345 ---- * handling this. */ ! public ICommitRecord getCommitRecord(long comitTime) { ! return _commitRecordIndex.find(comitTime); } *************** *** 1326,1339 **** } /* * ITransactionManager and friends. * ! * @todo refactor into a service. provide an implementation that supports ! * only a single Journal resource and an implementation that supports a ! * scale up/out architecture. the journal should resolve the service using ! * JINI. the timestamp service should probably be co-located with the ! * transaction service. */ /** --- 1364,1387 ---- } + /* * ITransactionManager and friends. * ! * @todo refactor into an ITransactionManager service. provide an ! * implementation that supports only a single Journal resource and an ! * implementation that supports a scale up/out architecture. the journal ! * should resolve the service using JINI. the timestamp service should ! * probably be co-located with the transaction service. ! */ ! ! /** ! * The service used to generate commit timestamps. ! * ! * @todo parameterize using {@link Options} so that we can resolve a ! * low-latency service for use with a distributed database commit ! * protocol. */ + protected final ITimestampService timestampFactory = LocalTimestampService.INSTANCE; /** *************** *** 1373,1378 **** // + _rootBlock.getCommitCounter()); ! return new Tx(this, timestampFactory.nextTimestamp(), readOnly) ! .getStartTimestamp(); } --- 1421,1429 ---- // + _rootBlock.getCommitCounter()); ! final long startTime = nextTimestamp(); ! ! new Tx(this, startTime, readOnly); ! ! 
return startTime; } *************** *** 1384,1399 **** } - public IIndex getIndex(String name, long ts) { - - if(name == null) throw new IllegalArgumentException(); - - ITx tx = activeTx.get(ts); - - if(tx==null) throw new IllegalStateException(); - - return tx.getIndex(name); - - } - public void abort(long ts) { --- 1435,1438 ---- *************** *** 1650,1653 **** --- 1689,1710 ---- } + + public long nextTimestamp() { + + return timestampFactory.nextTimestamp(); + + } + + public IIndex getIndex(String name, long ts) { + + if(name == null) throw new IllegalArgumentException(); + + ITx tx = activeTx.get(ts); + + if(tx==null) throw new IllegalStateException(); + + return tx.getIndex(name); + + } } Index: MappedBufferStrategy.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/MappedBufferStrategy.java,v retrieving revision 1.9 retrieving revision 1.10 diff -C2 -d -r1.9 -r1.10 *** MappedBufferStrategy.java 21 Feb 2007 20:17:21 -0000 1.9 --- MappedBufferStrategy.java 22 Feb 2007 16:59:34 -0000 1.10 *************** *** 5,9 **** import com.bigdata.rawstore.Addr; - import com.bigdata.scaleup.PartitionedJournal; /** --- 5,8 ---- *************** *** 19,23 **** * Note: Extension and truncation of a mapped file are not possible with the JDK * since there is no way to guarentee that the mapped file will be unmapped in a ! * timely manner. * </p> * --- 18,31 ---- * Note: Extension and truncation of a mapped file are not possible with the JDK * since there is no way to guarentee that the mapped file will be unmapped in a ! * timely manner. Journals that handle {@link IJournal#overflow()} should ! * trigger overflow just a bit earlier for a {@link MappedByteBuffer} in an ! * attempt to avoid running out of space in the journal. If a transaction can ! * not be committed due to overflow, it could be re-committed <em>after</em> ! * handling the overflow event (e.g., throw a "CommitRetryException"). ! 
* </p> ! * <p> ! * Note that the use of mapped files might not prove worth the candle due to the ! * difficulties with resource deallocation for this strategy and the good ! * performance of some alternative strategies. * </p> * *************** *** 27,38 **** * @see http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4724038 * @see BufferMode#Mapped - * - * @todo since the mapped file can not be extended or truncated, use of mapped - * files pretty much suggests that we pre-extend to the maximum allowable - * extent. it something can not be committed due to overflow, then a tx - * could be re-committed after a rollover of a {@link PartitionedJournal}. - * Note that the use of mapped files might not prove worth the candle due - * to the difficulties with resource deallocation for this strategy and - * the good performance of some alternative strategies. */ public class MappedBufferStrategy extends DiskBackedBufferStrategy { --- 35,38 ---- Index: BlockWriteCache.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/BlockWriteCache.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** BlockWriteCache.java 21 Feb 2007 20:17:21 -0000 1.2 --- BlockWriteCache.java 22 Feb 2007 16:59:34 -0000 1.3 *************** *** 80,84 **** * </p> * ! * FIXME Implement and integrate this buffering strategy in order to recover an * approximately 5x overhead introduced by doing too many IOs. * --- 80,84 ---- * </p> * ! * @todo Implement and integrate this buffering strategy in order to recover an * approximately 5x overhead introduced by doing too many IOs. 
* Index: DiskBackedBufferStrategy.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/DiskBackedBufferStrategy.java,v retrieving revision 1.9 retrieving revision 1.10 diff -C2 -d -r1.9 -r1.10 *** DiskBackedBufferStrategy.java 15 Feb 2007 22:01:18 -0000 1.9 --- DiskBackedBufferStrategy.java 22 Feb 2007 16:59:34 -0000 1.10 *************** *** 104,107 **** --- 104,119 ---- } + public void closeAndDelete() { + + close(); + + if(!file.delete()) { + + System.err.println("WARN: Could not delete: "+file.getAbsolutePath()); + + } + + } + public void deleteFile() { Index: TransientBufferStrategy.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/TransientBufferStrategy.java,v retrieving revision 1.9 retrieving revision 1.10 diff -C2 -d -r1.9 -r1.10 *** TransientBufferStrategy.java 21 Feb 2007 20:17:21 -0000 1.9 --- TransientBufferStrategy.java 22 Feb 2007 16:59:34 -0000 1.10 *************** *** 1,7 **** package com.bigdata.journal; import java.nio.ByteBuffer; - /** * Transient buffer strategy uses a direct buffer but never writes on disk. --- 1,7 ---- package com.bigdata.journal; + import java.io.File; import java.nio.ByteBuffer; /** * Transient buffer strategy uses a direct buffer but never writes on disk. *************** *** 39,46 **** BufferMode.Transient, // (useDirectBuffers ? ByteBuffer ! .allocateDirect((int) assertNonDiskExtent(initialExtent)) ! : ByteBuffer ! .allocate((int) assertNonDiskExtent(initialExtent))) ! ); open = true; --- 39,44 ---- BufferMode.Transient, // (useDirectBuffers ? ByteBuffer ! .allocateDirect((int) initialExtent) : ByteBuffer ! .allocate((int) initialExtent))); open = true; *************** *** 61,64 **** --- 59,71 ---- } + + /** + * Always returns <code>null</code>. 
+ */ + public File getFile() { + + return null; + + } public void close() { *************** *** 70,79 **** } - // force(true); - open = false; } final public boolean isOpen() { --- 77,90 ---- } open = false; } + public void closeAndDelete() { + + close(); + + } + final public boolean isOpen() { Index: Tx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/Tx.java,v retrieving revision 1.32 retrieving revision 1.33 diff -C2 -d -r1.32 -r1.33 *** Tx.java 21 Feb 2007 20:17:21 -0000 1.32 --- Tx.java 22 Feb 2007 16:59:34 -0000 1.33 *************** *** 214,226 **** /** - * Create a fully isolated read-write transaction. - */ - public Tx(Journal journal,long timestamp) { - - this(journal, timestamp, false); - - } - - /** * Create a transaction starting the last committed state of the journal as * of the specified startTime. --- 214,217 ---- *************** *** 408,412 **** * support a distributed database commit protocol. */ ! commitTimestamp = journal.timestampFactory.nextTimestamp(); try { --- 399,403 ---- * support a distributed database commit protocol. */ ! commitTimestamp = journal.nextTimestamp(); try { |
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/objndx In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16247/src/test/com/bigdata/objndx Modified Files: TestIndexSegmentWithBloomFilter.java TestIndexSegmentBuilderWithLargeTrees.java AbstractBTreeTestCase.java TestRestartSafe.java Log Message: Lots of little changes to get the test suites to use temporary files and to remove them when tests complete. At present, this only helps if the test succeeds. You need to do a try {} finally {} to get the tests to remove their files on failure. Index: TestRestartSafe.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/objndx/TestRestartSafe.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** TestRestartSafe.java 13 Feb 2007 23:01:10 -0000 1.4 --- TestRestartSafe.java 22 Feb 2007 16:59:35 -0000 1.5 *************** *** 54,58 **** import org.apache.log4j.Level; - import com.bigdata.cache.HardReferenceQueue; import com.bigdata.journal.BufferMode; import com.bigdata.journal.Journal; --- 54,57 ---- *************** *** 88,93 **** properties.setProperty(Options.SEGMENT, "0"); ! properties.setProperty(Options.FILE, getName()+".jnl"); ! properties.setProperty(Options.DELETE_ON_CLOSE,"false"); } --- 87,94 ---- properties.setProperty(Options.SEGMENT, "0"); ! properties.setProperty(Options.CREATE_TEMP_FILE, "true"); ! // properties.setProperty(Options.FILE, getName()+".jnl"); ! // properties.setProperty(Options.DELETE_ON_CLOSE,"false"); ! properties.setProperty(Options.DELETE_ON_EXIT,"true"); } *************** *** 100,103 **** --- 101,138 ---- /** + * Re-open the same backing store. + * + * @param store + * the existing store. + * + * @return A new store. + * + * @exception Throwable + * if the existing store is not closed, e.g., from failure to + * obtain a file lock, etc. + */ + protected Journal reopenStore(Journal store) { + + // close the store. 
+ store.close(); + + Properties properties = (Properties)getProperties().clone(); + + // Turn this off now since we want to re-open the same store. + properties.setProperty(Options.CREATE_TEMP_FILE,"false"); + + // The backing file that we need to re-open. + File file = store.getFile(); + + assertNotNull(file); + + // Set the file property explictly. + properties.setProperty(Options.FILE,file.toString()); + + return new Journal( properties ); + + } + + /** * Return a btree backed by a journal with the indicated branching factor. * The serializer requires that values in leaves are {@link SimpleEntry} *************** *** 109,160 **** * @return The btree. */ ! public BTree getBTree(int branchingFactor,Journal journal) { ! // try { ! // ! // Properties properties = getProperties(); ! // A modest leaf queue capacity. ! final int leafQueueCapacity = 500; - final int nscan = 10; - - BTree btree = new BTree(journal, - branchingFactor, - new HardReferenceQueue<PO>(new DefaultEvictionListener(), - leafQueueCapacity, nscan), - SimpleEntry.Serializer.INSTANCE, - null // no record compressor - ); - - return btree; - // - // } catch (IOException ex) { - // - // throw new RuntimeException(ex); - // - // } - } /** * - * FIXME develop test to use IStore.getBTree(name) and commit callback protocol. - * * @throws IOException */ public void test_restartSafe01() throws IOException { ! Properties properties = getProperties(); ! ! File file = new File(properties.getProperty(Options.FILE)); ! ! if(file.exists() && !file.delete()) { ! ! fail("Could not delete file: "+file.getAbsoluteFile()); ! ! } ! ! Journal journal = new Journal(properties); final int m = 3; --- 144,163 ---- * @return The btree. */ ! public BTree getBTree(int branchingFactor, Journal journal) { ! BTree btree = new BTree(journal, branchingFactor, ! SimpleEntry.Serializer.INSTANCE); ! return btree; } /** * * @throws IOException */ public void test_restartSafe01() throws IOException { ! 
Journal journal = new Journal(getProperties()); final int m = 3; *************** *** 192,197 **** journal.commit(); - journal.close(); - } --- 195,198 ---- *************** *** 201,205 **** { ! journal = new Journal(properties); final BTree btree = new BTree(journal, BTreeMetadata.read(journal, --- 202,206 ---- { ! journal = reopenStore(journal); final BTree btree = new BTree(journal, BTreeMetadata.read(journal, *************** *** 212,216 **** btree.entryIterator()); ! journal.close(); } --- 213,217 ---- btree.entryIterator()); ! journal.closeAndDelete(); } Index: TestIndexSegmentBuilderWithLargeTrees.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/objndx/TestIndexSegmentBuilderWithLargeTrees.java,v retrieving revision 1.9 retrieving revision 1.10 diff -C2 -d -r1.9 -r1.10 *** TestIndexSegmentBuilderWithLargeTrees.java 8 Feb 2007 21:32:11 -0000 1.9 --- TestIndexSegmentBuilderWithLargeTrees.java 22 Feb 2007 16:59:35 -0000 1.10 *************** *** 56,60 **** import com.bigdata.journal.Journal; import com.bigdata.journal.Options; - import com.bigdata.rawstore.Bytes; /** --- 56,59 ---- *************** *** 93,99 **** properties.setProperty(Options.SEGMENT, "0"); ! properties.setProperty(Options.FILE, getName()+".jnl"); ! ! properties.setProperty(Options.INITIAL_EXTENT, ""+Bytes.megabyte*20); } --- 92,96 ---- properties.setProperty(Options.SEGMENT, "0"); ! properties.setProperty(Options.CREATE_TEMP_FILE, "true"); } *************** *** 117,150 **** public BTree getBTree(int branchingFactor) { ! Properties properties = getProperties(); ! ! String filename = properties.getProperty(Options.FILE); ! ! if (filename != null) { ! ! File file = new File(filename); ! ! if (file.exists() && !file.delete()) { ! ! throw new RuntimeException("Could not delete file: " ! + file.getAbsoluteFile()); ! ! } ! ! } ! ! System.err.println("Opening journal: " + filename); ! Journal journal = new Journal(properties); ! 
! // A modest leaf queue capacity. ! final int leafQueueCapacity = 500; ! ! final int nscan = 10; BTree btree = new BTree(journal, branchingFactor, ! new HardReferenceQueue<PO>(new DefaultEvictionListener(), ! leafQueueCapacity, nscan), ! SimpleEntry.Serializer.INSTANCE, null // no record compressor ! ); return btree; --- 114,121 ---- public BTree getBTree(int branchingFactor) { ! Journal journal = new Journal(getProperties()); BTree btree = new BTree(journal, branchingFactor, ! SimpleEntry.Serializer.INSTANCE); return btree; *************** *** 334,338 **** System.err.println("Closing index segment."); seg.close(); ! if (!outFile.delete()) { --- 305,309 ---- System.err.println("Closing index segment."); seg.close(); ! if (!outFile.delete()) { *************** *** 347,351 **** */ System.err.println("Closing journal."); ! btree.getStore().close(); } --- 318,322 ---- */ System.err.println("Closing journal."); ! btree.getStore().closeAndDelete(); } Index: TestIndexSegmentWithBloomFilter.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/objndx/TestIndexSegmentWithBloomFilter.java,v retrieving revision 1.7 retrieving revision 1.8 diff -C2 -d -r1.7 -r1.8 *** TestIndexSegmentWithBloomFilter.java 8 Feb 2007 21:32:11 -0000 1.7 --- TestIndexSegmentWithBloomFilter.java 22 Feb 2007 16:59:35 -0000 1.8 *************** *** 100,106 **** properties.setProperty(Options.SEGMENT, "0"); ! properties.setProperty(Options.FILE, getName()+".jnl"); ! ! properties.setProperty(Options.INITIAL_EXTENT, ""+Bytes.megabyte*20); } --- 100,104 ---- properties.setProperty(Options.SEGMENT, "0"); ! properties.setProperty(Options.CREATE_TEMP_FILE, "true"); } *************** *** 452,456 **** */ System.err.println("Closing journal."); ! btree.getStore().close(); } --- 450,454 ---- */ System.err.println("Closing journal."); ! 
btree.getStore().closeAndDelete(); } Index: AbstractBTreeTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/objndx/AbstractBTreeTestCase.java,v retrieving revision 1.29 retrieving revision 1.30 diff -C2 -d -r1.29 -r1.30 *** AbstractBTreeTestCase.java 9 Feb 2007 18:56:59 -0000 1.29 --- AbstractBTreeTestCase.java 22 Feb 2007 16:59:35 -0000 1.30 *************** *** 905,911 **** for( int i=0; i<20; i++ ) { ! doInsertRandomKeySequenceTest(m, m, trace); ! doInsertRandomSparseKeySequenceTest(m, m, trace); } --- 905,911 ---- for( int i=0; i<20; i++ ) { ! doInsertRandomKeySequenceTest(m, m, trace).getStore().closeAndDelete(); ! doInsertRandomSparseKeySequenceTest(m, m, trace).getStore().closeAndDelete(); } *************** *** 916,922 **** for( int i=0; i<20; i++ ) { ! doInsertRandomKeySequenceTest(m, m*m, trace); ! doInsertRandomSparseKeySequenceTest(m, m*m, trace); } --- 916,922 ---- for( int i=0; i<20; i++ ) { ! doInsertRandomKeySequenceTest(m, m*m, trace).getStore().closeAndDelete(); ! doInsertRandomSparseKeySequenceTest(m, m*m, trace).getStore().closeAndDelete(); } *************** *** 927,933 **** for( int i=0; i<20; i++ ) { ! doInsertRandomKeySequenceTest(m, m*m*m, trace); ! doInsertRandomSparseKeySequenceTest(m, m*m*m, trace); } --- 927,933 ---- for( int i=0; i<20; i++ ) { ! doInsertRandomKeySequenceTest(m, m*m*m, trace).getStore().closeAndDelete(); ! doInsertRandomSparseKeySequenceTest(m, m*m*m, trace).getStore().closeAndDelete(); } *************** *** 938,944 **** // for( int i=0; i<20; i++ ) { // ! // doInsertRandomKeySequenceTest(m, m*m*m*m, trace); // ! // doInsertRandomSparseKeySequenceTest(m, m*m*m*m, trace); // // } --- 938,944 ---- // for( int i=0; i<20; i++ ) { // ! // doInsertRandomKeySequenceTest(m, m*m*m*m, trace).getStore().close(); // ! 
// doInsertRandomSparseKeySequenceTest(m, m*m*m*m, trace).getStore().close(); // // } *************** *** 957,961 **** * The trace level (zero disables most tracing). */ ! public void doInsertRandomKeySequenceTest(int m, int ninserts, int trace) { /* --- 957,961 ---- * The trace level (zero disables most tracing). */ ! public BTree doInsertRandomKeySequenceTest(int m, int ninserts, int trace) { /* *************** *** 975,979 **** } ! doInsertRandomKeySequenceTest(m, keys, entries, trace); } --- 975,979 ---- } ! return doInsertRandomKeySequenceTest(m, keys, entries, trace); } *************** *** 1433,1436 **** --- 1433,1439 ---- log.info(btree.counters.toString()); + btree.getStore().closeAndDelete(); + + } *************** *** 1527,1530 **** --- 1530,1535 ---- log.info(btree.counters.toString()); + btree.getStore().closeAndDelete(); + } |
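The log message for this commit notes that temporary files are only removed when a test succeeds, and that a try {} finally {} is needed to remove them on failure. A minimal, self-contained sketch of that cleanup pattern, using a plain JDK temp file as a stand-in for a Journal opened with CREATE_TEMP_FILE (the class and file names here are illustrative, not part of bigdata):

```java
import java.io.File;
import java.io.IOException;

public class TempFileCleanup {

    public static void main(String[] args) throws IOException {
        // Stand-in for a Journal backed by a temporary file.
        File file = File.createTempFile("testJournal", ".jnl");
        try {
            // ... exercise the store here; an assertion failure in this
            // block would normally leave the backing file behind ...
        } finally {
            // Runs whether the body succeeds or throws, so the backing
            // file is always removed (cf. closeAndDelete() in the diff).
            if (!file.delete()) {
                System.err.println("WARN: Could not delete: "
                        + file.getAbsolutePath());
            }
        }
        System.out.println(file.exists()); // false
    }
}
```

The same effect is what the later `closeAndDelete()` calls in the test diffs achieve: cleanup is tied to the code path rather than to test success.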
From: Bryan T. <tho...@us...> - 2007-02-22 16:59:39
|
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16247/src/java/com/bigdata/scaleup Modified Files: PartitionedJournal.java Log Message: Lots of little changes to get the test suites to use temporary files and to remove them when tests complete. At present, this only helps if the test succeeds. You need to do a try {} finally {} to get the tests to remove their files on failure. Index: PartitionedJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/PartitionedJournal.java,v retrieving revision 1.9 retrieving revision 1.10 diff -C2 -d -r1.9 -r1.10 *** PartitionedJournal.java 21 Feb 2007 20:17:22 -0000 1.9 --- PartitionedJournal.java 22 Feb 2007 16:59:35 -0000 1.10 *************** *** 55,60 **** --- 55,62 ---- import com.bigdata.journal.CommitRecordIndex; + import com.bigdata.journal.ICommitRecord; import com.bigdata.journal.ICommitter; import com.bigdata.journal.IJournal; + import com.bigdata.journal.IRootBlockView; import com.bigdata.journal.Journal; import com.bigdata.journal.Name2Addr.Entry; *************** *** 280,288 **** /** - * The recommened extension for slave files. - */ - public static final String JNL = ".jnl"; - - /** * The recommened extension for index segment files. */ --- 282,285 ---- *************** *** 343,346 **** --- 340,346 ---- * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ + * + * @todo support the use of temporary files for the partitioned journal and + * their clean removal on exit. */ public static class Options extends com.bigdata.journal.Options { *************** *** 369,380 **** /** - * <code>tmp.dir</code> - The property whose value is the name of the - * directory in which temporary files will be created. 
When not - * specified the default is governed by the value of the System - * property named <code>java.io.tmpdir</code> - */ - public static final String TMP_DIR = "tmp.dir"; - - /** * <code>migrationThreshold</code> - The name of a property whose * value is the minimum #of entries in a btree before the btree will be --- 369,372 ---- *************** *** 464,484 **** } - // "tmp.dir" - val = properties.getProperty(Options.TMP_DIR); - - tmpDir = val == null ? new File(System.getProperty("java.io.tmpdir")) - : new File(val); - - if (!tmpDir.exists()) { - - if (!tmpDir.mkdirs()) { - - throw new RuntimeException("Could not create directory: " - + tmpDir.getAbsolutePath()); - - } - - } - /* * Scan the slave directory for basename*.jnl. Open the most current --- 456,459 ---- *************** *** 487,491 **** File[] journalFiles = new NameAndExtensionFilter(new File(journalDir, ! basename).toString(), JNL).getFiles(); final File file; --- 462,466 ---- File[] journalFiles = new NameAndExtensionFilter(new File(journalDir, ! basename).toString(), Options.JNL).getFiles(); final File file; *************** *** 493,497 **** if(journalFiles.length==0) { ! file = new File(journalDir,basename+JNL); } else if(journalFiles.length==1) { --- 468,472 ---- if(journalFiles.length==0) { ! file = new File(journalDir,basename+Options.JNL); } else if(journalFiles.length==1) { *************** *** 543,546 **** --- 518,523 ---- */ this.slave = createSlave(this,properties); + + this.tmpDir = slave.tmpDir; } *************** *** 795,802 **** // immediate shutdown of the old journal. ! oldJournal.close(); ! // delete the old backing file (if any). ! oldJournal.getBufferStrategy().deleteFile(); } --- 772,779 ---- // immediate shutdown of the old journal. ! oldJournal.closeAndDelete(); ! // // delete the old backing file (if any). ! // oldJournal.getBufferStrategy().deleteFile(); } *************** *** 1115,1119 **** try { ! 
file = File.createTempFile(basename,JNL, journalDir); if (!file.delete()) { --- 1092,1096 ---- try { ! file = File.createTempFile(basename,Options.JNL, journalDir); if (!file.delete()) { *************** *** 1141,1148 **** --- 1118,1137 ---- } + /** + * Return the file for the current {@link SlaveJournal}. + */ + public File getFile() { + return slave.getFile(); + } + public void close() { slave.close(); } + public void closeAndDelete() { + // @todo implement full delete on the database. + slave.closeAndDelete(); + } + public void force(boolean metadata) { slave.force(metadata); *************** *** 1237,1239 **** --- 1226,1240 ---- } + public ICommitRecord getCommitRecord(long commitTime) { + return slave.getCommitRecord(commitTime); + } + + public IRootBlockView getRootBlockView() { + return slave.getRootBlockView(); + } + + public long nextTimestamp() { + return slave.nextTimestamp(); + } + } |
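The PartitionedJournal diff above reserves a unique journal file name by creating a temp file in the journal directory and immediately deleting it, leaving the name free for the new journal to create. A hedged, JDK-only sketch of that idiom (the directory, basename, and extension are illustrative):

```java
import java.io.File;
import java.io.IOException;

public class UniqueJournalName {

    /**
     * Reserve a unique "basename*.jnl" name in the given directory:
     * createTempFile() guarantees uniqueness, and the immediate delete()
     * frees the name so the journal implementation can create the file
     * itself, as in the diff above.
     */
    static File reserveName(File dir, String basename) throws IOException {
        File file = File.createTempFile(basename, ".jnl", dir);
        if (!file.delete()) {
            throw new IOException("Could not delete: "
                    + file.getAbsolutePath());
        }
        return file;
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"));
        File f = reserveName(dir, "journal");
        System.out.println(f.exists()); // false: name reserved, not yet created
    }
}
```

Note that this idiom leaves a small window between the delete() and the later create in which another process could claim the name; it trades that race for compatibility with APIs that insist on creating their own file.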
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/journal In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16247/src/test/com/bigdata/journal Modified Files: TestTxRunState.java TestDiskJournal.java AbstractMROWTestCase.java AbstractBufferStrategyTestCase.java ProxyTestCase.java AbstractBTreeWithJournalTestCase.java TestCommitHistory.java ComparisonTestDriver.java AbstractRestartSafeTestCase.java TestMappedJournal.java TestConflictResolution.java TestDirectJournal.java AbstractTestCase.java TestTx.java TestAll.java TestReadOnlyTx.java BenchmarkJournalWriteRate.java TestNamedIndices.java StressTestConcurrent.java TestTxJournalProtocol.java Log Message: Lots of little changes to get the test suites to use temporary files and to remove them when tests complete. At present, this only helps if the test succeeds. You need to do a try {} finally {} to get the tests to remove their files on failure. Index: AbstractBTreeWithJournalTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/AbstractBTreeWithJournalTestCase.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** AbstractBTreeWithJournalTestCase.java 21 Feb 2007 20:17:20 -0000 1.1 --- AbstractBTreeWithJournalTestCase.java 22 Feb 2007 16:59:34 -0000 1.2 *************** *** 81,84 **** --- 81,100 ---- properties.setProperty(Options.BUFFER_MODE, getBufferMode().toString() ); + /* + * Use a temporary file for the test. Such files are always deleted when + * the journal is closed or the VM exits. + * + * Note: Your unit test must close the store for delete to work. + */ + properties.setProperty(Options.CREATE_TEMP_FILE,"true"); + // properties.setProperty(Options.DELETE_ON_CLOSE,"true"); + properties.setProperty(Options.DELETE_ON_EXIT,"true"); + + properties.setProperty(Options.SEGMENT, "0"); + + // // Note: also deletes the file before it is used. 
+ // properties.setProperty(Options.FILE, AbstractTestCase + // .getTestJournalFile(getName(), properties)); + } *************** *** 101,107 **** public BTree getBTree(int branchingFactor) { ! Properties properties = getProperties(); ! ! Journal journal = new Journal(properties); BTree btree = new BTree(journal, branchingFactor, --- 117,121 ---- public BTree getBTree(int branchingFactor) { ! Journal journal = new Journal(getProperties()); BTree btree = new BTree(journal, branchingFactor, *************** *** 112,124 **** } ! /* ! * @todo try large branching factors, but limit the total #of keys inserted ! * or the running time will be too long (I am using an expontential #of keys ! * by default). ! * ! * Note: For sequential keys, m=128 causes the journal to exceed its initial ! * extent. */ ! int[] branchingFactors = new int[]{3,4,5,10};//,20,64};//,128};//,512}; /** --- 126,137 ---- } ! /** ! * The branching factors that will be used in the stress tests. The larger ! * the branching factor, the longer the run for these tests. The very small ! * branching factors (3, 4) test the btree code more fully since they will ! * exercise the fence posts on the invariants for nodes and leaves on pretty ! * much each mutation. */ ! int[] branchingFactors = new int[]{3,4};//,5,10,20,64};//,128};//,512}; /** *************** *** 132,143 **** int m = branchingFactors[i]; ! doSplitWithIncreasingKeySequence( getBTree(m), m, m ); ! doSplitWithIncreasingKeySequence( getBTree(m), m, m*m ); ! doSplitWithIncreasingKeySequence( getBTree(m), m, m*m*m ); ! doSplitWithIncreasingKeySequence( getBTree(m), m, m*m*m*m ); } --- 145,188 ---- int m = branchingFactors[i]; ! { ! ! BTree btree = getBTree(m); ! ! doSplitWithIncreasingKeySequence( btree, m, m ); ! ! btree.getStore().closeAndDelete(); ! ! } ! { ! BTree btree = getBTree(m); ! ! doSplitWithIncreasingKeySequence( btree, m, m*m ); ! ! btree.getStore().closeAndDelete(); ! ! } ! ! { ! BTree btree = getBTree(m); ! ! 
doSplitWithIncreasingKeySequence( btree, m, m*m*m ); ! ! btree.getStore().closeAndDelete(); ! ! } ! ! { + BTree btree = getBTree(m); + + doSplitWithIncreasingKeySequence( btree, m, m*m*m*m ); + + btree.getStore().closeAndDelete(); + + } + } *************** *** 153,164 **** int m = branchingFactors[i]; ! doSplitWithDecreasingKeySequence( getBTree(m), m, m ); ! doSplitWithDecreasingKeySequence( getBTree(m), m, m*m ); ! doSplitWithDecreasingKeySequence( getBTree(m), m, m*m*m ); ! doSplitWithDecreasingKeySequence( getBTree(m), m, m*m*m*m ); } --- 198,241 ---- int m = branchingFactors[i]; + + { + + BTree btree = getBTree(m); + + doSplitWithDecreasingKeySequence( btree, m, m ); + + btree.getStore().closeAndDelete(); + + } ! { ! ! BTree btree = getBTree(m); ! ! doSplitWithDecreasingKeySequence( btree, m, m*m ); ! ! btree.getStore().closeAndDelete(); ! ! } ! { ! BTree btree = getBTree(m); ! ! doSplitWithDecreasingKeySequence( btree, m, m*m*m ); ! ! btree.getStore().closeAndDelete(); ! ! } ! ! { ! BTree btree = getBTree(m); ! ! doSplitWithDecreasingKeySequence( btree, m, m*m*m*m ); ! ! btree.getStore().closeAndDelete(); ! ! } } *************** *** 176,189 **** int m = branchingFactors[i]; ! doSplitWithRandomDenseKeySequence( getBTree(m), m, m ); ! doSplitWithRandomDenseKeySequence( getBTree(m), m, m*m ); ! doSplitWithRandomDenseKeySequence( getBTree(m), m, m*m*m ); ! // This case overflows the default journal extent. ! // doSplitWithRandomKeySequence( getBTree(m), m, m*m*m*m ); ! } } --- 253,287 ---- int m = branchingFactors[i]; ! { ! BTree btree = getBTree(m); ! ! doSplitWithRandomDenseKeySequence( btree, m, m ); ! ! btree.getStore().closeAndDelete(); ! ! } ! ! { ! ! BTree btree = getBTree(m); ! ! doSplitWithRandomDenseKeySequence( btree, m, m*m ); ! ! btree.getStore().closeAndDelete(); ! ! } ! { ! BTree btree = getBTree(m); ! doSplitWithRandomDenseKeySequence( btree, m, m * m * m); ! ! btree.getStore().closeAndDelete(); ! ! } ! ! 
} } Index: TestConflictResolution.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestConflictResolution.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** TestConflictResolution.java 19 Feb 2007 19:00:18 -0000 1.3 --- TestConflictResolution.java 22 Feb 2007 16:59:34 -0000 1.4 *************** *** 127,133 **** public void test_writeWriteConflict_correctDetection() { ! Properties properties = getProperties(); ! ! Journal journal = new Journal(properties); String name = "abc"; --- 127,131 ---- public void test_writeWriteConflict_correctDetection() { ! Journal journal = new Journal(getProperties()); String name = "abc"; *************** *** 185,189 **** } ! journal.close(); } --- 183,187 ---- } ! journal.closeAndDelete(); } *************** *** 201,207 **** public void test_writeWriteConflict_conflictIsResolved() { ! Properties properties = getProperties(); ! ! Journal journal = new Journal(properties); String name = "abc"; --- 199,203 ---- public void test_writeWriteConflict_conflictIsResolved() { ! Journal journal = new Journal(getProperties()); String name = "abc"; *************** *** 260,264 **** assertEquals(v1c,(byte[])journal.getIndex(name).lookup(k1)); ! journal.close(); } --- 256,260 ---- assertEquals(v1c,(byte[])journal.getIndex(name).lookup(k1)); ! journal.closeAndDelete(); } Index: TestTxRunState.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestTxRunState.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** TestTxRunState.java 19 Feb 2007 19:00:18 -0000 1.3 --- TestTxRunState.java 22 Feb 2007 16:59:34 -0000 1.4 *************** *** 89,133 **** public void test_runStateMachine_activeAbort() throws IOException { ! final Properties properties = getProperties(); ! try { ! ! Journal journal = new Journal(properties); ! 
long ts0 = 0; ! ! assertFalse(journal.activeTx.containsKey(ts0)); ! assertFalse(journal.preparedTx.containsKey(ts0)); ! ITx tx0 = new Tx(journal,ts0); ! assertEquals(ts0,tx0.getStartTimestamp()); ! ! assertTrue( tx0.isActive() ); ! assertFalse( tx0.isPrepared() ); ! assertFalse( tx0.isAborted() ); ! assertFalse( tx0.isCommitted() ); ! assertFalse( tx0.isComplete() ); ! ! assertTrue(journal.activeTx.containsKey(ts0)); ! assertFalse(journal.preparedTx.containsKey(ts0)); ! ! tx0.abort(); ! assertFalse( tx0.isActive() ); ! assertFalse( tx0.isPrepared() ); ! assertTrue( tx0.isAborted() ); ! assertFalse( tx0.isCommitted() ); ! assertTrue( tx0.isComplete() ); ! assertFalse(journal.activeTx.containsKey(ts0)); ! assertFalse(journal.preparedTx.containsKey(ts0)); ! journal.close(); ! } finally { ! deleteTestJournalFile(); ! ! } } --- 89,123 ---- public void test_runStateMachine_activeAbort() throws IOException { ! Journal journal = new Journal(getProperties()); ! long ts0 = 0; ! assertFalse(journal.activeTx.containsKey(ts0)); ! assertFalse(journal.preparedTx.containsKey(ts0)); ! ITx tx0 = new Tx(journal, ts0, false); ! assertEquals(ts0, tx0.getStartTimestamp()); ! assertTrue(tx0.isActive()); ! assertFalse(tx0.isPrepared()); ! assertFalse(tx0.isAborted()); ! assertFalse(tx0.isCommitted()); ! assertFalse(tx0.isComplete()); ! assertTrue(journal.activeTx.containsKey(ts0)); ! assertFalse(journal.preparedTx.containsKey(ts0)); ! tx0.abort(); ! assertFalse(tx0.isActive()); ! assertFalse(tx0.isPrepared()); ! assertTrue(tx0.isAborted()); ! assertFalse(tx0.isCommitted()); ! assertTrue(tx0.isComplete()); ! assertFalse(journal.activeTx.containsKey(ts0)); ! assertFalse(journal.preparedTx.containsKey(ts0)); ! ! journal.closeAndDelete(); } *************** *** 138,146 **** public void test_runStateMachine_activePrepareAbort() throws IOException { ! final Properties properties = getProperties(); ! ! try { ! ! 
Journal journal = new Journal(properties); long ts0 = 0; --- 128,132 ---- public void test_runStateMachine_activePrepareAbort() throws IOException { ! Journal journal = new Journal(getProperties()); long ts0 = 0; *************** *** 149,153 **** assertFalse(journal.preparedTx.containsKey(ts0)); ! ITx tx0 = new Tx(journal,ts0); assertEquals(ts0,tx0.getStartTimestamp()); --- 135,139 ---- assertFalse(journal.preparedTx.containsKey(ts0)); ! ITx tx0 = new Tx(journal,ts0, false); assertEquals(ts0,tx0.getStartTimestamp()); *************** *** 183,193 **** assertFalse(journal.preparedTx.containsKey(ts0)); ! journal.close(); ! ! } finally { ! ! deleteTestJournalFile(); ! ! } } --- 169,173 ---- assertFalse(journal.preparedTx.containsKey(ts0)); ! journal.closeAndDelete(); } *************** *** 198,206 **** public void test_runStateMachine_activePrepareCommit() throws IOException { ! final Properties properties = getProperties(); ! ! try { ! ! Journal journal = new Journal(properties); long ts0 = 0; --- 178,182 ---- public void test_runStateMachine_activePrepareCommit() throws IOException { ! Journal journal = new Journal(getProperties()); long ts0 = 0; *************** *** 209,213 **** assertFalse(journal.preparedTx.containsKey(ts0)); ! ITx tx0 = new Tx(journal,ts0); assertEquals(ts0,tx0.getStartTimestamp()); --- 185,189 ---- assertFalse(journal.preparedTx.containsKey(ts0)); ! ITx tx0 = new Tx(journal,ts0, false); assertEquals(ts0,tx0.getStartTimestamp()); *************** *** 243,253 **** assertFalse(journal.preparedTx.containsKey(ts0)); ! journal.close(); ! ! } finally { ! ! deleteTestJournalFile(); ! ! } } --- 219,223 ---- assertFalse(journal.preparedTx.containsKey(ts0)); ! journal.closeAndDelete(); } *************** *** 258,322 **** * not change the transaction run state. */ ! public void test_runStateMachine_activeAbortAbort_correctRejection() throws IOException { ! ! final Properties properties = getProperties(); ! ! try { ! ! Journal journal = new Journal(properties); ! 
long ts0 = 0; ! ! assertFalse(journal.activeTx.containsKey(ts0)); ! assertFalse(journal.preparedTx.containsKey(ts0)); ! ITx tx0 = new Tx(journal,ts0); ! assertEquals(ts0,tx0.getStartTimestamp()); ! ! assertTrue( tx0.isActive() ); ! assertFalse( tx0.isPrepared() ); ! assertFalse( tx0.isAborted() ); ! assertFalse( tx0.isCommitted() ); ! assertFalse( tx0.isComplete() ); ! ! assertTrue(journal.activeTx.containsKey(ts0)); ! assertFalse(journal.preparedTx.containsKey(ts0)); ! ! tx0.abort(); ! assertFalse( tx0.isActive() ); ! assertFalse( tx0.isPrepared() ); ! assertTrue( tx0.isAborted() ); ! assertFalse( tx0.isCommitted() ); ! assertTrue( tx0.isComplete() ); ! assertFalse(journal.activeTx.containsKey(ts0)); ! assertFalse(journal.preparedTx.containsKey(ts0)); ! try { ! tx0.abort(); ! fail("Expecting: "+IllegalStateException.class); ! } ! catch( IllegalStateException ex ) { ! System.err.println("Ignoring expected exception: "+ex); ! } ! assertFalse( tx0.isActive() ); ! assertFalse( tx0.isPrepared() ); ! assertTrue( tx0.isAborted() ); ! assertFalse( tx0.isCommitted() ); ! assertTrue( tx0.isComplete() ); ! assertFalse(journal.activeTx.containsKey(ts0)); ! assertFalse(journal.preparedTx.containsKey(ts0)); ! journal.close(); ! } finally { ! deleteTestJournalFile(); ! } } --- 228,282 ---- * not change the transaction run state. */ ! public void test_runStateMachine_activeAbortAbort_correctRejection() ! throws IOException { ! Journal journal = new Journal(getProperties()); ! long ts0 = 0; ! assertFalse(journal.activeTx.containsKey(ts0)); ! assertFalse(journal.preparedTx.containsKey(ts0)); ! ITx tx0 = new Tx(journal, ts0, false); ! assertEquals(ts0, tx0.getStartTimestamp()); ! assertTrue(tx0.isActive()); ! assertFalse(tx0.isPrepared()); ! assertFalse(tx0.isAborted()); ! assertFalse(tx0.isCommitted()); ! assertFalse(tx0.isComplete()); ! assertTrue(journal.activeTx.containsKey(ts0)); ! assertFalse(journal.preparedTx.containsKey(ts0)); ! tx0.abort(); ! assertFalse(tx0.isActive()); ! 
assertFalse(tx0.isPrepared()); ! assertTrue(tx0.isAborted()); ! assertFalse(tx0.isCommitted()); ! assertTrue(tx0.isComplete()); ! assertFalse(journal.activeTx.containsKey(ts0)); ! assertFalse(journal.preparedTx.containsKey(ts0)); ! try { ! tx0.abort(); ! fail("Expecting: " + IllegalStateException.class); ! } catch (IllegalStateException ex) { ! System.err.println("Ignoring expected exception: " + ex); } + assertFalse(tx0.isActive()); + assertFalse(tx0.isPrepared()); + assertTrue(tx0.isAborted()); + assertFalse(tx0.isCommitted()); + assertTrue(tx0.isComplete()); + + assertFalse(journal.activeTx.containsKey(ts0)); + assertFalse(journal.preparedTx.containsKey(ts0)); + + journal.closeAndDelete(); + } *************** *** 328,336 **** public void test_runStateMachine_activePreparePrepare_correctRejection() throws IOException { ! final Properties properties = getProperties(); ! ! try { ! ! Journal journal = new Journal(properties); long ts0 = 0; --- 288,292 ---- public void test_runStateMachine_activePreparePrepare_correctRejection() throws IOException { ! Journal journal = new Journal(getProperties()); long ts0 = 0; *************** *** 339,343 **** assertFalse(journal.preparedTx.containsKey(ts0)); ! ITx tx0 = new Tx(journal,ts0); assertEquals(ts0,tx0.getStartTimestamp()); --- 295,299 ---- assertFalse(journal.preparedTx.containsKey(ts0)); ! ITx tx0 = new Tx(journal,ts0,false); assertEquals(ts0,tx0.getStartTimestamp()); *************** *** 379,389 **** assertFalse(journal.preparedTx.containsKey(ts0)); ! journal.close(); ! ! } finally { ! ! deleteTestJournalFile(); ! ! } } --- 335,339 ---- assertFalse(journal.preparedTx.containsKey(ts0)); ! journal.closeAndDelete(); } *************** *** 396,404 **** public void test_runStateMachine_activeCommit_correctRejection() throws IOException { ! final Properties properties = getProperties(); ! ! try { ! ! 
Journal journal = new Journal(properties); long ts0 = 0; --- 346,350 ---- public void test_runStateMachine_activeCommit_correctRejection() throws IOException { ! Journal journal = new Journal(getProperties()); long ts0 = 0; *************** *** 407,411 **** assertFalse(journal.preparedTx.containsKey(ts0)); ! ITx tx0 = new Tx(journal,ts0); assertEquals(ts0,tx0.getStartTimestamp()); --- 353,357 ---- assertFalse(journal.preparedTx.containsKey(ts0)); ! ITx tx0 = new Tx(journal,ts0,false); assertEquals(ts0,tx0.getStartTimestamp()); *************** *** 436,446 **** assertFalse(journal.preparedTx.containsKey(ts0)); ! journal.close(); ! ! } finally { ! ! deleteTestJournalFile(); ! ! } } --- 382,386 ---- assertFalse(journal.preparedTx.containsKey(ts0)); ! journal.closeAndDelete(); } *************** *** 455,461 **** throws IOException { ! final Properties properties = getProperties(); ! ! Journal journal = new Journal(properties); String name = "abc"; --- 395,399 ---- throws IOException { ! Journal journal = new Journal(getProperties()); String name = "abc"; *************** *** 531,535 **** assertNull(journal.getTx(tmp.getStartTimestamp())); ! journal.close(); } --- 469,473 ---- assertNull(journal.getTx(tmp.getStartTimestamp())); ! 
journal.closeAndDelete(); } Index: ProxyTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/ProxyTestCase.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** ProxyTestCase.java 26 Oct 2006 19:41:37 -0000 1.2 --- ProxyTestCase.java 22 Feb 2007 16:59:34 -0000 1.3 *************** *** 176,207 **** return getOurDelegate().getProperties(); } - - // public IObjectManager getObjectManager() { - // return getOurDelegate().getObjectManager(); - // } - // - // public void dropStore() { - // getOurDelegate().dropStore(); - // } - // - // public boolean isStoreOpen() { - // return getOurDelegate().isStoreOpen(); - // } - // - // public IObjectManager reopenStore() { - // return getOurDelegate().reopenStore(); - // } - // - // public IObjectManager openStore() { - // return getOurDelegate().openStore(); - // } - // - // public void closeStore() { - // getOurDelegate().closeStore(); - // } - // - // public IGeneric addGeneric() { - // return getObjectManager().makeObject(); - // } } --- 176,182 ---- return getOurDelegate().getProperties(); } + public Journal reopenStore(Journal store) { + return getOurDelegate().reopenStore(store); + } } Index: TestNamedIndices.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestNamedIndices.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** TestNamedIndices.java 8 Feb 2007 21:32:09 -0000 1.1 --- TestNamedIndices.java 22 Feb 2007 16:59:34 -0000 1.2 *************** *** 48,53 **** package com.bigdata.journal; - import java.util.Properties; - import com.bigdata.objndx.BTree; import com.bigdata.objndx.SimpleEntry; --- 48,51 ---- *************** *** 90,98 **** public void test_registerAndUse() { ! Properties properties = getProperties(); ! ! properties.setProperty(Options.DELETE_ON_CLOSE, "false"); ! ! 
Journal journal = new Journal(properties); String name = "abc"; --- 88,92 ---- public void test_registerAndUse() { ! Journal journal = new Journal(getProperties()); String name = "abc"; *************** *** 116,127 **** journal.commit(); - journal.close(); - if (journal.isStable()) { ! /* * re-open the journal and test restart safety. */ ! journal = new Journal(properties); btree = (BTree) journal.getIndex(name); --- 110,119 ---- journal.commit(); if (journal.isStable()) { ! /* * re-open the journal and test restart safety. */ ! journal = reopenStore(journal); btree = (BTree) journal.getIndex(name); *************** *** 131,138 **** assertEquals(v0, btree.lookup(k0)); - journal.close(); - } } --- 123,130 ---- assertEquals(v0, btree.lookup(k0)); } + journal.closeAndDelete(); + } Index: TestDiskJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestDiskJournal.java,v retrieving revision 1.7 retrieving revision 1.8 diff -C2 -d -r1.7 -r1.8 *** TestDiskJournal.java 21 Feb 2007 20:17:20 -0000 1.7 --- TestDiskJournal.java 22 Feb 2007 16:59:34 -0000 1.8 *************** *** 114,120 **** properties.setProperty(Options.BUFFER_MODE, BufferMode.Disk.toString()); ! properties.setProperty(Options.SEGMENT, "0"); ! properties.setProperty(Options.FILE,getTestJournalFile(properties)); return properties; --- 114,120 ---- properties.setProperty(Options.BUFFER_MODE, BufferMode.Disk.toString()); ! // properties.setProperty(Options.SEGMENT, "0"); ! // properties.setProperty(Options.FILE,getTestJournalFile(properties)); return properties; *************** *** 129,158 **** */ public void test_create_disk01() throws IOException { ! final Properties properties = getProperties(); ! try { ! ! Journal journal = new Journal(properties); ! ! DiskOnlyStrategy bufferStrategy = (DiskOnlyStrategy) journal._bufferStrategy; ! ! assertTrue("isStable",bufferStrategy.isStable()); ! 
assertFalse("isFullyBuffered",bufferStrategy.isFullyBuffered()); ! assertEquals(Options.FILE, properties.getProperty(Options.FILE), bufferStrategy.file.toString()); ! assertEquals(Options.INITIAL_EXTENT, Options.DEFAULT_INITIAL_EXTENT, ! bufferStrategy.getInitialExtent()); ! assertEquals(Options.MAXIMUM_EXTENT, Options.DEFAULT_MAXIMUM_EXTENT, ! bufferStrategy.getMaximumExtent()); ! assertNotNull("raf", bufferStrategy.raf); ! assertEquals(Options.BUFFER_MODE, BufferMode.Disk, bufferStrategy.getBufferMode()); ! journal.close(); ! ! } finally { ! ! deleteTestJournalFile(); ! ! } } --- 129,152 ---- */ public void test_create_disk01() throws IOException { ! final Properties properties = getProperties(); ! Journal journal = new Journal(properties); ! DiskOnlyStrategy bufferStrategy = (DiskOnlyStrategy) journal._bufferStrategy; ! ! assertTrue("isStable", bufferStrategy.isStable()); ! assertFalse("isFullyBuffered", bufferStrategy.isFullyBuffered()); ! // assertEquals(Options.FILE, properties.getProperty(Options.FILE), ! // bufferStrategy.file.toString()); ! assertEquals(Options.INITIAL_EXTENT, Options.DEFAULT_INITIAL_EXTENT, ! bufferStrategy.getInitialExtent()); ! assertEquals(Options.MAXIMUM_EXTENT, Options.DEFAULT_MAXIMUM_EXTENT, ! bufferStrategy.getMaximumExtent()); ! assertNotNull("raf", bufferStrategy.raf); ! assertEquals(Options.BUFFER_MODE, BufferMode.Disk, bufferStrategy ! .getBufferMode()); ! ! 
journal.closeAndDelete(); } Index: AbstractTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/AbstractTestCase.java,v retrieving revision 1.18 retrieving revision 1.19 diff -C2 -d -r1.18 -r1.19 *** AbstractTestCase.java 21 Feb 2007 20:17:20 -0000 1.18 --- AbstractTestCase.java 22 Feb 2007 16:59:34 -0000 1.19 *************** *** 49,57 **** import java.io.File; - import java.io.IOException; import java.nio.ByteBuffer; import java.util.Properties; import java.util.Random; import junit.framework.TestCase; import junit.framework.TestCase2; --- 49,58 ---- import java.io.File; import java.nio.ByteBuffer; import java.util.Properties; import java.util.Random; + import com.bigdata.rawstore.IRawStore; + import junit.framework.TestCase; import junit.framework.TestCase2; *************** *** 106,109 **** --- 107,163 ---- + ":END:====================\n"); + deleteTestFile(); + + } + + public void tearDown() throws Exception { + + super.tearDown(); + + deleteTestFile(); + + } + + /** + * Note: your unit must close the store for delete to work. + */ + protected void deleteTestFile() { + + if(m_properties==null) return; // never requested. + + String val; + + val = (String) m_properties.getProperty(Options.FILE); + + if(val!= null) { + + File file = new File(val); + + if(file.exists()) { + + val = (String) m_properties.getProperty(Options.DELETE_ON_EXIT); + + if(val==null) { + + val = (String) m_properties.getProperty(Options.DELETE_ON_CLOSE); + + } + + if(Boolean.parseBoolean(val)) { + + System.err.println("Attempting to delete file: "+file); + + if(!file.delete()) { + + log.warn("Could not delete file: "+file); + + } + + } + + } + + } + } *************** *** 139,159 **** m_properties = super.getProperties(); ! } ! /* ! * Wrap up the cached properties so that they are not modifable by the ! * caller (no side effects between calls). ! */ ! 
Properties properties = new Properties( m_properties ); ! /* ! * The test files are always deleted when the journal is closed ! * normally. ! */ ! properties.setProperty(Options.DELETE_ON_CLOSE,"true"); ! return properties; } --- 193,250 ---- m_properties = super.getProperties(); + // m_properties = new Properties( m_properties ); + + /* + * Wrap up the cached properties so that they are not modifable by the + * caller (no side effects between calls). + */ ! /* ! * Use a temporary file for the test. Such files are always deleted when ! * the journal is closed or the VM exits. ! */ ! m_properties.setProperty(Options.CREATE_TEMP_FILE,"true"); ! // m_properties.setProperty(Options.DELETE_ON_CLOSE,"true"); ! m_properties.setProperty(Options.DELETE_ON_EXIT,"true"); ! ! m_properties.setProperty(Options.SEGMENT, "0"); ! ! } ! return m_properties; ! } ! ! /** ! * Re-open the same backing store. ! * ! * @param store ! * the existing store. ! * ! * @return A new store. ! * ! * @exception Throwable ! * if the existing store is not closed, e.g., from failure to ! * obtain a file lock, etc. ! */ ! protected Journal reopenStore(Journal store) { ! // close the store. ! store.close(); ! Properties properties = (Properties)getProperties().clone(); ! ! // Turn this off now since we want to re-open the same store. ! properties.setProperty(Options.CREATE_TEMP_FILE,"false"); ! ! // The backing file that we need to re-open. ! File file = store.getFile(); ! ! assertNotNull(file); ! ! // Set the file property explictly. ! properties.setProperty(Options.FILE,file.toString()); ! ! return new Journal( properties ); } *************** *** 192,307 **** /** ! * <p> ! * Return the name of a journal file to be used for a unit test. The file is ! * created using the temporary file creation mechanism, but it is then ! * deleted. Ideally the returned filename is unique for the scope of the ! * test and will not be reported by the journal as a "pre-existing" file. ! * </p> ! * <p> ! 
* Note: This method is not advised for performance tests in which the disk ! * allocation matters since the file is allocated in a directory choosen by ! * the OS. ! * </p> ! * ! * @param properties ! * The configured properties. This is used to extract metadata ! * about the journal test configuration that is included in the ! * generated filename. Therefore this method should be invoked ! * after you have set the properties, or at least the ! * {@link Options#BUFFER_MODE}. ! * ! * @return The unique filename. ! * ! * @see {@link #getProperties()}, which sets the "deleteOnClose" flag for ! * unit tests. ! */ ! protected String getTestJournalFile(Properties properties) { ! ! return getTestJournalFile(getName(),properties); ! ! } ! ! static public String getTestJournalFile(String name,Properties properties) { ! ! // Used to name the file. ! String bufferMode = properties.getProperty(Options.BUFFER_MODE); ! ! // Used to name the file. ! if( bufferMode == null ) bufferMode = "default"; ! ! try { ! ! // Create the temp. file. ! File tmp = File.createTempFile("test-" + bufferMode + "-" ! + name + "-", ".jnl"); ! ! // Delete the file otherwise the Journal will attempt to open it. ! if (!tmp.delete()) { ! ! throw new RuntimeException("Unable to remove empty test file: " ! + tmp); ! ! } ! ! // make sure that the file is eventually removed. ! tmp.deleteOnExit(); ! ! return tmp.toString(); ! ! } catch (IOException ex) { ! ! throw new RuntimeException(ex); ! ! } ! ! } ! ! /** ! * Version of {@link #deleteTestJournalFile(String)} that obtains the name ! * of the journal file from the {@link Options#FILE} property (if any) on ! * {@link #getProperties()}. ! */ ! protected void deleteTestJournalFile() { ! ! String filename = getProperties().getProperty(Options.FILE); ! ! if( filename != null ) { ! ! deleteTestJournalFile(filename); ! ! } ! ! } ! ! /** ! * Delete the test file (if any). Note that test files are NOT created when ! 
* testing the {@link BufferMode#Transient} journal. A warning message that ! * the file could not be deleted generally means that you forgot to close ! * the journal in your test. ! * ! * @param filename ! * The filename (optional). ! */ ! protected void deleteTestJournalFile(String filename) { ! ! if( filename == null ) return; ! ! try { ! ! File file = new File(filename); ! ! if ( file.exists() && ! file.delete()) { ! ! System.err.println("Warning: could not delete: " + file.getAbsolutePath()); ! ! } ! ! } catch (Throwable t) { ! ! System.err.println("Warning: " + t); ! ! } ! ! } /** --- 283,398 ---- /** ! // * <p> ! // * Return the name of a journal file to be used for a unit test. The file is ! // * created using the temporary file creation mechanism, but it is then ! // * deleted. Ideally the returned filename is unique for the scope of the ! // * test and will not be reported by the journal as a "pre-existing" file. ! // * </p> ! // * <p> ! // * Note: This method is not advised for performance tests in which the disk ! // * allocation matters since the file is allocated in a directory choosen by ! // * the OS. ! // * </p> ! // * ! // * @param properties ! // * The configured properties. This is used to extract metadata ! // * about the journal test configuration that is included in the ! // * generated filename. Therefore this method should be invoked ! // * after you have set the properties, or at least the ! // * {@link Options#BUFFER_MODE}. ! // * ! // * @return The unique filename. ! // * ! // * @see {@link #getProperties()}, which sets the "deleteOnClose" flag for ! // * unit tests. ! // */ ! // protected String getTestJournalFile(Properties properties) { ! // ! // return getTestJournalFile(getName(),properties); ! // ! // } ! // ! // static public String getTestJournalFile(String name,Properties properties) { ! // ! // // Used to name the file. ! // String bufferMode = properties.getProperty(Options.BUFFER_MODE); ! // ! // // Used to name the file. ! 
// if( bufferMode == null ) bufferMode = "default"; ! // ! // try { ! // ! // // Create the temp. file. ! // File tmp = File.createTempFile("test-" + bufferMode + "-" ! // + name + "-", ".jnl"); ! // ! // // Delete the file otherwise the Journal will attempt to open it. ! // if (!tmp.delete()) { ! // ! // throw new RuntimeException("Unable to remove empty test file: " ! // + tmp); ! // ! // } ! // ! // // make sure that the file is eventually removed. ! // tmp.deleteOnExit(); ! // ! // return tmp.toString(); ! // ! // } catch (IOException ex) { ! // ! // throw new RuntimeException(ex); ! // ! // } ! // ! // } ! // ! // /** ! // * Version of {@link #deleteTestJournalFile(String)} that obtains the name ! // * of the journal file from the {@link Options#FILE} property (if any) on ! // * {@link #getProperties()}. ! // */ ! // protected void deleteTestJournalFile() { ! // ! // String filename = getProperties().getProperty(Options.FILE); ! // ! // if( filename != null ) { ! // ! // deleteTestJournalFile(filename); ! // ! // } ! // ! // } ! // ! // /** ! // * Delete the test file (if any). Note that test files are NOT created when ! // * testing the {@link BufferMode#Transient} journal. A warning message that ! // * the file could not be deleted generally means that you forgot to close ! // * the journal in your test. ! // * ! // * @param filename ! // * The filename (optional). ! // */ ! // protected void deleteTestJournalFile(String filename) { ! // ! // if( filename == null ) return; ! // ! // try { ! // ! // File file = new File(filename); ! // ! // if ( file.exists() && ! file.delete()) { ! // ! // System.err.println("Warning: could not delete: " + file.getAbsolutePath()); ! // ! // } ! // ! // } catch (Throwable t) { ! // ! // System.err.println("Warning: " + t); ! // ! // } ! // ! 
// } /** Index: AbstractMROWTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/AbstractMROWTestCase.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** AbstractMROWTestCase.java 21 Feb 2007 20:17:20 -0000 1.1 --- AbstractMROWTestCase.java 22 Feb 2007 16:59:34 -0000 1.2 *************** *** 281,284 **** --- 281,286 ---- + elapsed + "ms (" + nok * 1000 / elapsed + " reads per second); nwritten=" + nwritten); + + store.closeAndDelete(); } *************** *** 531,542 **** properties.setProperty(Options.SEGMENT, "0"); ! ! File file = File.createTempFile("bigdata", ".jnl"); ! ! file.deleteOnExit(); ! ! if(!file.delete()) fail("Could not remove temp file before test"); ! ! properties.setProperty(Options.FILE, file.toString()); Journal journal = new Journal(properties); --- 533,538 ---- properties.setProperty(Options.SEGMENT, "0"); ! ! properties.setProperty(Options.CREATE_TEMP_FILE,"true"); Journal journal = new Journal(properties); *************** *** 547,550 **** --- 543,552 ---- journal.shutdown(); + if(journal.getFile()!=null) { + + journal.getFile().delete(); + + } + } Index: AbstractBufferStrategyTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/AbstractBufferStrategyTestCase.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** AbstractBufferStrategyTestCase.java 21 Feb 2007 20:17:20 -0000 1.3 --- AbstractBufferStrategyTestCase.java 22 Feb 2007 16:59:34 -0000 1.4 *************** *** 89,97 **** .toString()); properties.setProperty(Options.SEGMENT, "0"); ! // Note: also deletes the file before it is used. ! properties.setProperty(Options.FILE, AbstractTestCase ! .getTestJournalFile(getName(), properties)); } --- 89,105 ---- .toString()); + /* + * Use a temporary file for the test. 
Such files are always deleted when + * the journal is closed or the VM exits. + */ + properties.setProperty(Options.CREATE_TEMP_FILE,"true"); + // properties.setProperty(Options.DELETE_ON_CLOSE,"true"); + properties.setProperty(Options.DELETE_ON_EXIT,"true"); + properties.setProperty(Options.SEGMENT, "0"); ! // // Note: also deletes the file before it is used. ! // properties.setProperty(Options.FILE, AbstractTestCase ! // .getTestJournalFile(getName(), properties)); } *************** *** 177,181 **** } ! store.close(); } --- 185,189 ---- } ! store.closeAndDelete(); } *************** *** 216,220 **** assertEquals("userExtent",userExtent, bufferStrategy.getUserExtent()); ! store.close(); } --- 224,228 ---- assertEquals("userExtent",userExtent, bufferStrategy.getUserExtent()); ! store.closeAndDelete(); } *************** *** 303,307 **** assertEquals(b2, bufferStrategy.read(addr2)); ! store.close(); } --- 311,315 ---- assertEquals(b2, bufferStrategy.read(addr2)); ! store.closeAndDelete(); } Index: ComparisonTestDriver.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/ComparisonTestDriver.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** ComparisonTestDriver.java 21 Feb 2007 20:17:20 -0000 1.1 --- ComparisonTestDriver.java 22 Feb 2007 16:59:34 -0000 1.2 *************** *** 161,175 **** String name = sb.toString(); - /* - * Create a temporary file for the journal. Note that you must delete - * the temporary file before starting the journal since it is empty and - * just a placeholder for a unique filename. - */ - File file = File.createTempFile("bigdata", ".jnl"); - - file.deleteOnExit(); - - properties.setProperty(Options.FILE, file.toString()); - return new Condition(name,properties); --- 161,164 ---- *************** *** 257,262 **** // force delete of the files on close of the journal under test. ! 
properties.setProperty(Options.DELETE_ON_CLOSE,"true"); ! properties.setProperty(Options.SEGMENT, "0"); --- 246,251 ---- // force delete of the files on close of the journal under test. ! properties.setProperty(Options.CREATE_TEMP_FILE,"true"); ! // properties.setProperty(Options.DELETE_ON_CLOSE,"true"); properties.setProperty(Options.SEGMENT, "0"); *************** *** 306,316 **** Condition condition = itr.next(); - File file = new File(condition.properties - .getProperty(Options.FILE)); - - if (!file.delete()) - throw new AssertionError( - "Could not remove temp file before test"); - IComparisonTest test = (IComparisonTest) cl.newInstance(); --- 295,298 ---- Index: TestTx.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestTx.java,v retrieving revision 1.18 retrieving revision 1.19 diff -C2 -d -r1.18 -r1.19 *** TestTx.java 21 Feb 2007 20:17:20 -0000 1.18 --- TestTx.java 22 Feb 2007 16:59:34 -0000 1.19 *************** *** 92,98 **** public void test_noIndicesRegistered() { ! Properties properties = getProperties(); ! ! Journal journal = new Journal(properties); journal.commit(); --- 92,96 ---- public void test_noIndicesRegistered() { ! Journal journal = new Journal(getProperties()); journal.commit(); *************** *** 107,111 **** assertTrue(journal.commit(tx)!=0L); ! journal.close(); } --- 105,109 ---- assertTrue(journal.commit(tx)!=0L); ! journal.closeAndDelete(); } *************** *** 118,124 **** public void test_indexNotVisibleUnlessCommitted() { ! Properties properties = getProperties(); ! ! Journal journal = new Journal(properties); String name = "abc"; --- 116,120 ---- public void test_indexNotVisibleUnlessCommitted() { ! Journal journal = new Journal(getProperties()); String name = "abc"; *************** *** 149,153 **** journal.abort(tx2); ! journal.close(); } --- 145,193 ---- journal.abort(tx2); ! journal.closeAndDelete(); ! ! } ! ! /** ! 
* Test verifies that you always get the same object back when you ask for ! * an isolated named index. This is important both to conserve resources ! * and since the write set is in the isolated index -- you lose it and it ! * is gone. ! */ ! public void test_sameIndexObject() { ! ! Journal journal = new Journal(getProperties()); ! ! final String name = "abc"; ! ! { ! ! journal.registerIndex(name, new UnisolatedBTree(journal)); ! ! journal.commit(); ! ! } ! ! final long tx1 = journal.newTx(); ! ! final IIndex ndx1 = journal.getIndex(name,tx1); ! ! assertNotNull(ndx1); ! ! final long tx2 = journal.newTx(); ! ! final IIndex ndx2 = journal.getIndex(name,tx2); ! ! assertTrue(tx1 != tx2); ! ! assertTrue(ndx1 != ndx2); ! ! assertNotNull(ndx2); ! ! assertTrue( ndx1 == journal.getIndex(name,tx1)); ! ! assertTrue( ndx2 == journal.getIndex(name,tx2)); ! ! journal.closeAndDelete(); } *************** *** 161,167 **** public void test_readIsolation() { ! Properties properties = getProperties(); ! ! Journal journal = new Journal(properties); final String name = "abc"; --- 201,205 ---- public void test_readIsolation() { ! Journal journal = new Journal(getProperties()); final String name = "abc"; *************** *** 257,261 **** journal.abort(tx2); ! journal.close(); } --- 295,299 ---- journal.abort(tx2); ! journal.closeAndDelete(); } *************** *** 271,277 **** public void test_writeIsolation() { ! Properties properties = getProperties(); ! ! Journal journal = new Journal(properties); final String name = "abc"; --- 309,313 ---- public void test_writeIsolation() { ! Journal journal = new Journal(getProperties()); final String name = "abc"; *************** *** 375,379 **** } ! journal.close(); } --- 411,415 ---- } ! journal.closeAndDelete(); } *************** *** 393,399 **** public void test_delete001() { ! final Properties properties = getProperties(); ! ! Journal journal = new Journal(properties); String name = "abc"; --- 429,433 ---- public void test_delete001() { ! 
Journal journal = new Journal(getProperties()); String name = "abc"; *************** *** 469,473 **** assertTrue(journal.getIndex(name).contains(id0)); ! journal.close(); } --- 503,507 ---- assertTrue(journal.getIndex(name).contains(id0)); ! journal.closeAndDelete(); } *************** *** 484,490 **** public void test_delete002() { ! final Properties properties = getProperties(); ! ! Journal journal = new Journal(properties); String name = "abc"; --- 518,522 ---- public void test_delete002() { ! Journal journal = new Journal(getProperties()); String name = "abc"; *************** *** 577,581 **** assertFalse(journal.getIndex(name).contains(id0)); ! journal.close(); } --- 609,613 ---- assertFalse(journal.getIndex(name).contains(id0)); ! journal.closeAndDelete(); } *************** *** 593,599 **** public void test_delete003() { ! final Properties properties = getProperties(); ! ! Journal journal = new Journal(properties); String name = "abc"; --- 625,629 ---- public void test_delete003() { ! Journal journal = new Journal(getProperties()); String name = "abc"; *************** *** 665,669 **** assertEquals(v1, (byte[])journal.getIndex(name).lookup(id0)); ! journal.close(); } --- 695,699 ---- assertEquals(v1, (byte[])journal.getIndex(name).lookup(id0)); ! journal.closeAndDelete(); } *************** *** 1124,1130 **** public void test_commit_noConflict01() { ! final Properties properties = getProperties(); ! ! Journal journal = new Journal(properties); final String name = "abc"; --- 1154,1158 ---- public void test_commit_noConflict01() { ! Journal journal = new Journal(getProperties()); final String name = "abc"; *************** *** 1204,1208 **** id1)); ! journal.close(); } --- 1232,1236 ---- id1)); ! journal.closeAndDelete(); } *************** *** 1214,1220 **** public void test_deletePreExistingVersion_noConflict() { ! final Properties properties = getProperties(); ! ! 
Journal journal = new Journal(properties); final String name = "abc"; --- 1242,1246 ---- public void test_deletePreExistingVersion_noConflict() { ! Journal journal = new Journal(getProperties()); final String name = "abc"; *************** *** 1269,1273 **** assertFalse(journal.getIndex(name).contains(id0)); ! journal.close(); } --- 1295,1299 ---- assertFalse(journal.getIndex(name).contains(id0)); ! journal.closeAndDelete(); } Index: TestMappedJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestMappedJournal.java,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -d -r1.8 -r1.9 *** TestMappedJournal.java 21 Feb 2007 20:17:20 -0000 1.8 --- TestMappedJournal.java 22 Feb 2007 16:59:34 -0000 1.9 *************** *** 115,121 **** properties.setProperty(Options.BUFFER_MODE, BufferMode.Mapped.toString()); ! properties.setProperty(Options.SEGMENT, "0"); ! properties.setProperty(Options.FILE,getTestJournalFile(properties)); return properties; --- 115,121 ---- properties.setProperty(Options.BUFFER_MODE, BufferMode.Mapped.toString()); ! // properties.setProperty(Options.SEGMENT, "0"); ! // properties.setProperty(Options.FILE,getTestJournalFile(properties)); return properties; *************** *** 132,164 **** final Properties properties = getProperties(); - - try { - - Journal journal = new Journal(properties); ! MappedBufferStrategy bufferStrategy = (MappedBufferStrategy) journal._bufferStrategy; ! ! assertTrue("isStable",bufferStrategy.isStable()); ! assertFalse("isFullyBuffered",bufferStrategy.isFullyBuffered()); ! assertEquals(Options.FILE, properties.getProperty(Options.FILE), bufferStrategy.file.toString()); ! assertEquals(Options.INITIAL_EXTENT, Options.DEFAULT_INITIAL_EXTENT, ! bufferStrategy.getInitialExtent()); ! assertEquals(Options.MAXIMUM_EXTENT, Options.DEFAULT_MAXIMUM_EXTENT, ! bufferStrategy.getMaximumExtent()); ! assertNotNull("raf", bufferStrategy.raf); ! 
assertEquals("bufferMode", BufferMode.Mapped, bufferStrategy.getBufferMode()); ! assertNotNull("directBuffer", bufferStrategy.directBuffer); ! assertNotNull("mappedBuffer", bufferStrategy.mappedBuffer); ! assertTrue( "userExtent", bufferStrategy.getExtent() > bufferStrategy.getUserExtent()); ! assertEquals( "bufferCapacity", bufferStrategy.getUserExtent(), bufferStrategy.directBuffer ! .capacity()); ! ! journal.close(); ! } finally { ! ! deleteTestJournalFile(); ! ! } } --- 132,159 ---- final Properties properties = getProperties(); ! Journal journal = new Journal(properties); ! MappedBufferStrategy bufferStrategy = (MappedBufferStrategy) journal._bufferStrategy; ! ! assertTrue("isStable", bufferStrategy.isStable()); ! assertFalse("isFullyBuffered", bufferStrategy.isFullyBuffered()); ! // assertEquals(Options.FILE, properties.getProperty(Options.FILE), ! // bufferStrategy.file.toString()); ! assertEquals(Options.INITIAL_EXTENT, Options.DEFAULT_INITIAL_EXTENT, ! bufferStrategy.getInitialExtent()); ! assertEquals(Options.MAXIMUM_EXTENT, Options.DEFAULT_MAXIMUM_EXTENT, ! bufferStrategy.getMaximumExtent()); ! assertNotNull("raf", bufferStrategy.raf); ! assertEquals("bufferMode", BufferMode.Mapped, bufferStrategy ! .getBufferMode()); ! assertNotNull("directBuffer", bufferStrategy.directBuffer); ! assertNotNull("mappedBuffer", bufferStrategy.mappedBuffer); ! assertTrue("userExtent", bufferStrategy.getExtent() > bufferStrategy ! .getUserExtent()); ! assertEquals("bufferCapacity", bufferStrategy.getUserExtent(), ! bufferStrategy.directBuffer.capacity()); ! ! 
journal.closeAndDelete(); } Index: TestTxJournalProtocol.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestTxJournalProtocol.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** TestTxJournalProtocol.java 17 Feb 2007 21:34:13 -0000 1.1 --- TestTxJournalProtocol.java 22 Feb 2007 16:59:34 -0000 1.2 *************** *** 75,112 **** */ public void test_duplicateTransactionIdentifiers01() throws IOException { ! ! final Properties properties = getProperties(); try { ! Journal journal = new Journal(properties); ! Tx tx0 = new Tx(journal,0); ! try { ! // Try to create another transaction with the same identifier. ! new Tx(journal,0); ! ! fail( "Expecting: "+IllegalStateException.class); ! ! } ! ! catch( IllegalStateException ex ) { ! ! System.err.println("Ignoring expected exception: "+ex); ! ! } ! ! tx0.abort(); ! ! journal.close(); ! } finally { ! deleteTestJournalFile(); - } - } --- 75,102 ---- */ ... [truncated message content] |
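The `reopenStore(Journal)` helper added to `AbstractTestCase` in this commit closes the store, turns the temp-file option off, sets the file option to the old backing file, and opens a new `Journal` against it. Below is a minimal self-contained sketch of that idea, with a stand-in `Store` class and plain string property keys (`createTempFile`, `file`) in place of the real `Journal` constructor and `Options` constants, which are not reproduced here:

```java
import java.io.File;
import java.io.IOException;
import java.util.Properties;

public class ReopenSketch {

    // Stand-in for Journal: remembers the backing file it was opened against.
    static class Store {
        final File file;
        boolean open = true;
        Store(Properties p) throws IOException {
            if (Boolean.parseBoolean(p.getProperty("createTempFile", "false"))) {
                // first open: allocate a fresh temporary file
                file = File.createTempFile("test", ".jnl");
            } else {
                // re-open: use the explicitly named backing file
                file = new File(p.getProperty("file"));
            }
        }
        File getFile() { return file; }
        void close() { open = false; }
    }

    // Mirrors the helper: close, clone the properties, disable the temp-file
    // option so a fresh file is NOT created, and name the old file explicitly.
    static Store reopenStore(Store store, Properties props) throws IOException {
        store.close();
        Properties p = (Properties) props.clone();
        p.setProperty("createTempFile", "false");
        p.setProperty("file", store.getFile().toString());
        return new Store(p);
    }

    // True iff the re-opened store is backed by the same file.
    static boolean demo() throws IOException {
        Properties props = new Properties();
        props.setProperty("createTempFile", "true");
        Store s1 = new Store(props);
        Store s2 = reopenStore(s1, props);
        boolean sameFile = s1.getFile().equals(s2.getFile());
        s2.getFile().delete(); // tidy up the temp file
        return sameFile;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo()); // prints "true"
    }
}
```

The design point is the same one the diff makes: re-opening must bypass the temp-file machinery, or every "re-open" would silently create a brand-new empty store instead of testing restart safety.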
From: Bryan T. <tho...@us...> - 2007-02-22 16:59:39
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/objndx In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16247/src/java/com/bigdata/objndx Modified Files: IndexSegmentFileStore.java Log Message: Lots of little changes to get the test suites to use temporary files and to remove them when tests complete. At present, this only helps if the test succeeds. You need to do a try {} finally {} to get the tests to remove their files on failure. Index: IndexSegmentFileStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/objndx/IndexSegmentFileStore.java,v retrieving revision 1.7 retrieving revision 1.8 diff -C2 -d -r1.7 -r1.8 *** IndexSegmentFileStore.java 21 Feb 2007 20:17:21 -0000 1.7 --- IndexSegmentFileStore.java 22 Feb 2007 16:59:35 -0000 1.8 *************** *** 123,126 **** --- 123,132 ---- } + public File getFile() { + + return file; + + } + public void close() { *************** *** 141,144 **** --- 147,162 ---- } + + public void closeAndDelete() { + + close(); + + if(!file.delete()) { + + System.err.println("WARN: Could not delete: "+file.getAbsolutePath()); + + } + + } public long write(ByteBuffer data) { |
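The log message above notes that automatic cleanup only helps when a test succeeds, and recommends `try {} finally {}` so files are removed on failure as well. A minimal stand-alone sketch of that pattern follows; a plain temp file stands in for the journal, and none of the real bigdata test APIs are used:

```java
import java.io.File;
import java.io.IOException;

public class CleanupSketch {

    // Runs a "test body" against a temp file and guarantees cleanup in the
    // finally block, so the file is removed on success AND on failure.
    // Returns true iff the file is gone afterwards.
    static boolean runAndCleanUp() throws IOException {
        File file = File.createTempFile("test", ".jnl");
        try {
            // ... exercise the store here; a failed assertion would throw ...
        } finally {
            if (file.exists() && !file.delete()) {
                System.err.println("WARN: Could not delete: "
                        + file.getAbsolutePath());
            }
        }
        return !file.exists();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(runAndCleanUp()); // prints "true"
    }
}
```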
From: Bryan T. <tho...@us...> - 2007-02-22 16:59:39
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/rawstore In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16247/src/java/com/bigdata/rawstore Modified Files: SimpleMemoryRawStore.java SimpleFileRawStore.java IRawStore.java Log Message: Lots of little changes to get the test suites to use temporary files and to remove them when tests complete. At present, this only helps if the test succeeds. You need to do a try {} finally {} to get the tests to remove their files on failure. Index: SimpleFileRawStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/rawstore/SimpleFileRawStore.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** SimpleFileRawStore.java 21 Feb 2007 20:17:21 -0000 1.4 --- SimpleFileRawStore.java 22 Feb 2007 16:59:35 -0000 1.5 *************** *** 134,137 **** --- 134,143 ---- } + public File getFile() { + + return file; + + } + /** * This also releases the lock if any obtained by the constructor. *************** *** 155,158 **** --- 161,176 ---- } + public void closeAndDelete() { + + close(); + + if(!file.delete()) { + + System.err.println("WARN: Could not delete: "+file.getAbsolutePath()); + + } + + } + public ByteBuffer read(long addr) { Index: IRawStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/rawstore/IRawStore.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** IRawStore.java 21 Feb 2007 20:17:21 -0000 1.4 --- IRawStore.java 22 Feb 2007 16:59:35 -0000 1.5 *************** *** 48,51 **** --- 48,52 ---- package com.bigdata.rawstore; + import java.io.File; import java.nio.ByteBuffer; *************** *** 211,214 **** --- 212,226 ---- /** + * Closes the store immediately and releases its persistent resources. 
+ */ + public void closeAndDelete(); + + /** + * The backing file -or- <code>null</code> if there is no backing file + * for the store. + */ + public File getFile(); + + /** * True iff backed by stable storage. * Index: SimpleMemoryRawStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/rawstore/SimpleMemoryRawStore.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** SimpleMemoryRawStore.java 21 Feb 2007 20:17:21 -0000 1.3 --- SimpleMemoryRawStore.java 22 Feb 2007 16:59:35 -0000 1.4 *************** *** 48,51 **** --- 48,52 ---- package com.bigdata.rawstore; + import java.io.File; import java.nio.ByteBuffer; import java.util.ArrayList; *************** *** 136,139 **** --- 137,149 ---- } + /** + * This always returns <code>null</code>. + */ + public File getFile() { + + return null; + + } + public void close() { *************** *** 147,150 **** --- 157,166 ---- } + public void closeAndDelete() { + + close(); + + } + public ByteBuffer read(long addr) { |
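The interface changes above (getFile() returning <code>null</code> for a transient store, closeAndDelete() degenerating to close() when there is nothing persistent to release) can be sketched with a toy in-memory store. This is a simplified stand-in, not the actual SimpleMemoryRawStore, and the plain integer addresses are a simplification of bigdata's Addr encoding:

```java
import java.io.File;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Slimmed-down version of the IRawStore contract as revised in this commit.
interface RawStore {
    long write(ByteBuffer data);
    ByteBuffer read(long addr);
    File getFile();
    void close();
    void closeAndDelete();
}

class MemoryRawStore implements RawStore {

    private final List<byte[]> records = new ArrayList<>();
    private boolean open = true;

    public long write(ByteBuffer data) {
        byte[] copy = new byte[data.remaining()];
        data.get(copy); // advances the caller's position to its limit
        records.add(copy);
        return records.size() - 1; // address is just the record index here
    }

    public ByteBuffer read(long addr) {
        return ByteBuffer.wrap(records.get((int) addr)).asReadOnlyBuffer();
    }

    /** Always null: there is no backing file for a transient store. */
    public File getFile() { return null; }

    public void close() { open = false; }

    /** Nothing persistent to delete, so this is just close(). */
    public void closeAndDelete() { close(); }
}
```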
From: Bryan T. <tho...@us...> - 2007-02-22 16:59:12
Update of /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/rio In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16203/src/test/com/bigdata/rdf/rio Modified Files: TestRioIntegration.java Log Message: Lots of little changes to get the test suites to use temporary files and to remove them when tests complete. At present, this only helps if the test succeeds. You need to do a try {} finally {} to get the tests to remove their files on failure. Index: TestRioIntegration.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/rio/TestRioIntegration.java,v retrieving revision 1.14 retrieving revision 1.15 diff -C2 -d -r1.14 -r1.15 *** TestRioIntegration.java 12 Feb 2007 21:51:43 -0000 1.14 --- TestRioIntegration.java 22 Feb 2007 16:58:59 -0000 1.15 *************** *** 221,224 **** --- 221,226 ---- } // next source to load. + store.closeAndDelete(); + // long elapsed = System.currentTimeMillis() - begin; // |
From: Bryan T. <tho...@us...> - 2007-02-22 16:59:06
Update of /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16203/src/test/com/bigdata/rdf Modified Files: AbstractTripleStoreTestCase.java TestRestartSafe.java TestInsertRateStore.java Log Message: Lots of little changes to get the test suites to use temporary files and to remove them when tests complete. At present, this only helps if the test succeeds. You need to do a try {} finally {} to get the tests to remove their files on failure. Index: TestRestartSafe.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/TestRestartSafe.java,v retrieving revision 1.7 retrieving revision 1.8 diff -C2 -d -r1.7 -r1.8 *** TestRestartSafe.java 11 Feb 2007 17:34:23 -0000 1.7 --- TestRestartSafe.java 22 Feb 2007 16:58:59 -0000 1.8 *************** *** 268,271 **** --- 268,273 ---- assertEquals(bn2,store.getTerm(bn2_id)); + store.closeAndDelete(); + } Index: TestInsertRateStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/TestInsertRateStore.java,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -d -r1.4 -r1.5 *** TestInsertRateStore.java 11 Feb 2007 17:34:23 -0000 1.4 --- TestInsertRateStore.java 22 Feb 2007 16:58:59 -0000 1.5 *************** *** 548,551 **** --- 548,553 ---- store.commit(); + store.closeAndDelete(); + long elapsed = System.currentTimeMillis() - begin; *************** *** 567,571 **** w.close(); ! } --- 569,573 ---- w.close(); ! 
} Index: AbstractTripleStoreTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/AbstractTripleStoreTestCase.java,v retrieving revision 1.10 retrieving revision 1.11 diff -C2 -d -r1.10 -r1.11 *** AbstractTripleStoreTestCase.java 12 Feb 2007 21:51:43 -0000 1.10 --- AbstractTripleStoreTestCase.java 22 Feb 2007 16:58:59 -0000 1.11 *************** *** 166,170 **** public void tearDown() { ! if(store.isOpen()) store.close(); } --- 166,170 ---- public void tearDown() { ! if(store.isOpen()) store.closeAndDelete(); } |
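The tearDown() change above guards cleanup on isOpen(), so tests that already called closeAndDelete() themselves do not trip an "already closed" error during teardown. A sketch of that guard; TripleStoreStub is a hypothetical stand-in for the real store class, with a boolean in place of actual file deletion:

```java
public class TripleStoreStub {

    private boolean open = true;
    private boolean deleted = false;

    public boolean isOpen() { return open; }

    public boolean isDeleted() { return deleted; }

    public void close() {
        if (!open) throw new IllegalStateException("Already closed.");
        open = false;
    }

    public void closeAndDelete() {
        close();
        deleted = true; // stand-in for removing the backing files
    }

    /** The guarded cleanup pattern from the tearDown() diff above. */
    public static void tearDown(TripleStoreStub store) {
        if (store.isOpen()) store.closeAndDelete();
    }
}
```

The guard makes teardown idempotent with respect to tests that perform their own cleanup.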
From: Bryan T. <tho...@us...> - 2007-02-22 16:59:06
Update of /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/inf In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16203/src/test/com/bigdata/rdf/inf Modified Files: TestFullForwardClosure.java TestMagicSets.java AbstractInferenceEngineTestCase.java Removed Files: TestTempStore.java Log Message: Lots of little changes to get the test suites to use temporary files and to remove them when tests complete. At present, this only helps if the test succeeds. You need to do a try {} finally {} to get the tests to remove their files on failure. Index: AbstractInferenceEngineTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/inf/AbstractInferenceEngineTestCase.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** AbstractInferenceEngineTestCase.java 17 Feb 2007 03:08:00 -0000 1.5 --- AbstractInferenceEngineTestCase.java 22 Feb 2007 16:58:58 -0000 1.6 *************** *** 149,153 **** public void tearDown() { ! store.close(); } --- 149,154 ---- public void tearDown() { ! if (store.isOpen()) ! 
store.closeAndDelete(); } Index: TestFullForwardClosure.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/inf/TestFullForwardClosure.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** TestFullForwardClosure.java 6 Feb 2007 23:06:44 -0000 1.3 --- TestFullForwardClosure.java 22 Feb 2007 16:58:58 -0000 1.4 *************** *** 93,96 **** --- 93,98 ---- store.commit(); + store.closeAndDelete(); + } Index: TestMagicSets.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/test/com/bigdata/rdf/inf/TestMagicSets.java,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** TestMagicSets.java 17 Feb 2007 03:08:00 -0000 1.5 --- TestMagicSets.java 22 Feb 2007 16:58:58 -0000 1.6 *************** *** 235,238 **** --- 235,240 ---- assertTrue(answerSet.containsStatement(z, rdfType, A)); + store.closeAndDelete(); + } --- TestTempStore.java DELETED --- |
From: Bryan T. <tho...@us...> - 2007-02-22 16:59:03
Update of /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv16203/src/java/com/bigdata/rdf Modified Files: TripleStore.java Log Message: Lots of little changes to get the test suites to use temporary files and to remove them when tests complete. At present, this only helps if the test succeeds. You need to do a try {} finally {} to get the tests to remove their files on failure. Index: TripleStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata-rdf/src/java/com/bigdata/rdf/TripleStore.java,v retrieving revision 1.19 retrieving revision 1.20 diff -C2 -d -r1.19 -r1.20 *** TripleStore.java 21 Feb 2007 20:16:43 -0000 1.19 --- TripleStore.java 22 Feb 2007 16:58:59 -0000 1.20 *************** *** 140,160 **** * handle some different initialization properties.) * - * @todo Play with the branching factor again. Now that we are using overflow to - * evict data onto index segments we can use a higher branching factor and - * simply evict more often. Is this worth it? We might want a lower - * branching factor on the journal since we can not tell how large any - * given write will be and then use larger branching factors on the index - * segments. - * - * @todo try loading some very large data sets; try Transient vs Disk vs Direct - * modes. If Transient has significantly better performance then it - * indicates that we are waiting on IO so introduce AIO support in the - * Journal and try Disk vs Direct with aio. Otherwise, consider - * refactoring the btree to have the values be variable length byte[]s - * with serialization in the client and other tuning focused on IO (the - * only questions with that approach are appropriate compression - * techniques and handling transparently timestamps as part of the value - * when using an isolated btree in a transaction). - * * @todo the only added cost for a quad store is the additional statement * indices. 
There are only three more statement indices in a quad store. --- 140,143 ---- *************** *** 169,176 **** * identifiers). * ! * @todo The use of long[] identifiers for statements also means that the SPO ! * and other statement indices are only locally ordered so they can not be * used to perform a range scan that is ordered in the terms without ! * joining against the various term indices and then sorting the outputs. * * @todo possibly save frequently seen terms in each batch for the next batch in --- 152,159 ---- * identifiers). * ! * @todo The use of term identifiers for statements means that the SPO and other ! * statement indices are only locally ordered and that they can not be * used to perform a range scan that is ordered in the terms without ! * joining against the term index and/or sorting the outputs. * * @todo possibly save frequently seen terms in each batch for the next batch in *************** *** 191,253 **** * add the metadata to the terms until we have the statement. * ! * @todo Note that a very interesting solution for RDF places all data into a ! * statement index and then use block compression techniques to remove ! * frequent terms, e.g., the repeated parts of the value. Also note that ! * there will be no "value" for an rdf statement since existence is all. ! * The key completely encodes the statement. So, another approach is to ! * bit code the repeated substrings found within the key in each leaf. * This way the serialized key size reflects only the #of distinctions. * ! * @todo I've been thinking about rdfs stores in the light of the work on ! * bigdata. Transactional isolation for rdf is really quite simple. Since ! * lexicons (uri, literal or bnode indices) do not (really) support ! * deletion, the only acts are asserting term and asserting and retracting ! * statements. since assertion terms can lead to write-write conflicts, ! * which must be resolved and can cascade into the statement indices since ! 
* the statement key depends directly on the assigned term identifiers. a ! * statement always merges with an existing statement, inserts never cause ! * conflicts. Hence the only possible write-write conflict for the ! * statement indices is a write-delete conflict. quads do not really make ! * this more complex (or expensive) since merges only occur when there is ! * a context match. however entailments can cause twists depending on how ! * they are realized. ! * ! * If we do a pure RDF layer (vs RDF over GOM over bigdata), then it seems that ! * we could simple use a statement index (no lexicons for URIs, etc). Normally ! * this inflates the index size since you have lots of duplicate strings, but we ! * could just use block compression to factor out those strings when we evict ! * index leaves to disk. Prefix compression of keys will already do great things ! * for removing repetitive strings from the index nodes and block compression ! * will get at the leftover redundancy. ! * ! * So, one dead simple architecture is one index per access path (there is of ! * course some index reuse across the access paths) with the statements inline ! * in the index using prefix key compression and block compression to remove ! * redundancy. Inserts on this architecture would just send triples to the store ! * and the various indices would be maintained by the store itself. Those ! * indices could be load balanced in segments across a cluster. ! * ! * Since a read that goes through to disk reads an entire leaf at a time, the ! * most obvious drawback that I see is caching for commonly used assertions, but ! * that is easy to implement with some cache invalidation mechanism coupled to ! * deletes. ! * ! * I can also see how to realize very large bulk inserts outside of a ! * transactional context while handling concurrent transactions -- you just have ! * to reconcile as of the commit time of the bulk insert and you get to do that ! 
* using efficient compacting sort-merges of "perfect" bulk index segments. The ! * architecture would perform well on concurrent apstars style document loading ! * as well as what we might normally consider a bulk load (a few hundred ! * megabytes of data) within the normal transaction mechanisms, but if you ! * needed to ingest uniprot you would want to use a different technique :-) ! * outside of the normal transactional isolation mechanisms. ! * ! * I'm not sure what the right solution is for entailments, e.g., truth ! * maintenance vs eager closure. Either way, you would definitely want to avoid ! * tuple at a time processing and batch things up so as to minimize the #of ! * index tests that you had to do. So, handling entailments and efficient joins ! * for high-level query languages would be the two places for more thought. And ! * there are little odd spots in RDF - handling bnodes, typed literals, and the ! * concept of a total sort order for the statement index. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> --- 174,196 ---- * add the metadata to the terms until we have the statement. * ! * @todo Bit code the repeated substrings found within the key in each leaf. * This way the serialized key size reflects only the #of distinctions. * ! * @todo Very large bulk inserts outside of a transactional context while ! * handling concurrent transactions. Reconcile as of the commit time of ! * the bulk insert and you get to do that using efficient compacting ! * sort-merges of "perfect" bulk index segments. The architecture would ! * perform well on concurrent apstars style document loading as well as ! * what we might normally consider a bulk load (a few hundred megabytes of ! * data) within the normal transaction mechanisms, but if you needed to ! * ingest uniprot you would want to use a different technique :-) outside ! * of the normal transactional isolation mechanisms. ! * <P> ! 
* I'm not sure what the right solution is for entailments, e.g., truth ! * maintenance vs eager closure. Either way, you would definitely want to ! * avoid tuple at a time processing and batch things up so as to minimize ! * the #of index tests that you had to do. So, handling entailments and ! * efficient joins for high-level query languages would be the two places ! * for more thought. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> |
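The @todo comments above discuss prefix compression of keys in index nodes and bit-coding the repeated substrings within a leaf. The simplest scheme in that family is front coding: each sorted key is stored as the length of the prefix it shares with its predecessor plus only the differing suffix. A hedged sketch for illustration only; this is not the actual bigdata leaf serialization:

```java
import java.util.ArrayList;
import java.util.List;

public class PrefixCoder {

    /** One front-coded key: shared prefix length + remaining suffix. */
    public static final class Entry {
        final int shared;
        final String suffix;
        Entry(int shared, String suffix) { this.shared = shared; this.suffix = suffix; }
    }

    /** Encode keys (assumed sorted) as (sharedPrefixLen, suffix) pairs. */
    public static List<Entry> encode(List<String> keys) {
        List<Entry> out = new ArrayList<>();
        String prev = "";
        for (String key : keys) {
            int shared = 0;
            final int max = Math.min(prev.length(), key.length());
            while (shared < max && prev.charAt(shared) == key.charAt(shared))
                shared++;
            out.add(new Entry(shared, key.substring(shared)));
            prev = key;
        }
        return out;
    }

    /** Reverse the encoding by rebuilding each key from its predecessor. */
    public static List<String> decode(List<Entry> coded) {
        List<String> out = new ArrayList<>();
        String prev = "";
        for (Entry e : coded) {
            prev = prev.substring(0, e.shared) + e.suffix;
            out.add(prev);
        }
        return out;
    }
}
```

On sorted URI-heavy keys the shared prefixes dominate, so the serialized size reflects mostly the distinctions, which is the effect the @todo is after.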
From: Bryan T. <tho...@us...> - 2007-02-21 20:17:58
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/rawstore In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv5461/src/test/com/bigdata/rawstore Modified Files: AbstractRawStoreTestCase.java Log Message: Further work supporting transactional isolation. Index: AbstractRawStoreTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/rawstore/AbstractRawStoreTestCase.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** AbstractRawStoreTestCase.java 9 Feb 2007 16:13:17 -0000 1.1 --- AbstractRawStoreTestCase.java 21 Feb 2007 20:17:20 -0000 1.2 *************** *** 93,102 **** * @param actual * Buffer. */ ! public void assertEquals(byte[] expected, ByteBuffer actual ) { if( expected == null ) throw new IllegalArgumentException(); if( actual == null ) fail("actual is null"); /* Create a read-only view on the buffer so that we do not mess with --- 93,113 ---- * @param actual * Buffer. + * + * @todo optimize test helper when ByteBuffer is backed by an array, but + * also compensate for the arrayOffset. */ ! static public void assertEquals(byte[] expected, ByteBuffer actual ) { if( expected == null ) throw new IllegalArgumentException(); if( actual == null ) fail("actual is null"); + + if(actual.hasArray() && actual.arrayOffset()==0) { + + assertEquals(expected,actual.array()); + + return; + + } /* Create a read-only view on the buffer so that we do not mess with *************** *** 185,189 **** try { ! store.read( 0L, null); fail("Expecting: "+IllegalArgumentException.class); --- 196,200 ---- try { ! store.read( 0L ); fail("Expecting: "+IllegalArgumentException.class); *************** *** 212,216 **** final int offset = 10; ! store.read( Addr.toLong(nbytes, offset), null); fail("Expecting: "+IllegalArgumentException.class); --- 223,227 ---- final int offset = 10; ! 
store.read( Addr.toLong(nbytes, offset) ); fail("Expecting: "+IllegalArgumentException.class); *************** *** 237,241 **** final int offset = 0; ! store.read( Addr.toLong(nbytes, offset), null); fail("Expecting: "+IllegalArgumentException.class); --- 248,252 ---- final int offset = 0; ! store.read( Addr.toLong(nbytes, offset) ); fail("Expecting: "+IllegalArgumentException.class); *************** *** 347,351 **** // read the data back. ! ByteBuffer actual = store.read(addr1,null); assertEquals(expected,actual); --- 358,362 ---- // read the data back. ! ByteBuffer actual = store.read(addr1); assertEquals(expected,actual); *************** *** 387,391 **** { // read the data back. ! ByteBuffer actual = store.read(addr1, null); assertEquals(expected, actual); --- 398,402 ---- { // read the data back. ! ByteBuffer actual = store.read(addr1); assertEquals(expected, actual); *************** *** 403,407 **** { // read the data back. ! ByteBuffer actual2 = store.read(addr1, null); assertEquals(expected, actual2); --- 414,418 ---- { // read the data back. ! ByteBuffer actual2 = store.read(addr1); assertEquals(expected, actual2); *************** *** 416,755 **** } ! /** ! * Test verifies read behavior when the offered buffer has exactly the ! * required #of bytes of remaining. ! */ ! public void test_writeReadWith2ndBuffer_exactCapacity() { ! ! IRawStore store = getStore(); ! ! Random r = new Random(); ! ! final int len = 100; ! ! byte[] expected1 = new byte[len]; ! ! r.nextBytes(expected1); ! ! ByteBuffer tmp = ByteBuffer.wrap(expected1); ! ! long addr1 = store.write(tmp); ! ! // verify that the position is advanced to the limit. ! assertEquals(len,tmp.position()); ! assertEquals(tmp.position(),tmp.limit()); ! ! // a buffer large enough to hold the record. ! ByteBuffer buf = ByteBuffer.allocate(len); ! ! // read the data, offering our buffer. ! ByteBuffer actual = store.read(addr1, buf); ! ! // verify the data are record correct. ! assertEquals(expected1,actual); ! ! 
/* ! * the caller's buffer MUST be used since it has sufficient bytes ! * remaining ! */ ! assertTrue("Caller's buffer was not used.", actual==buf); ! ! /* ! * verify the position and limit after the read. ! */ ! assertEquals(0,actual.position()); ! assertEquals(len,actual.limit()); ! ! } ! ! public void test_writeReadWith2ndBuffer_excessCapacity_zeroPosition() { ! ! IRawStore store = getStore(); ! ! Random r = new Random(); ! ! final int len = 100; ! ! byte[] expected1 = new byte[len]; ! ! r.nextBytes(expected1); ! ! ByteBuffer tmp = ByteBuffer.wrap(expected1); ! ! long addr1 = store.write(tmp); ! ! // verify that the position is advanced to the limit. ! assertEquals(len,tmp.position()); ! assertEquals(tmp.position(),tmp.limit()); ! ! // a buffer large enough to hold the record. ! ByteBuffer buf = ByteBuffer.allocate(len+1); ! ! // read the data, offering our buffer. ! ByteBuffer actual = store.read(addr1, buf); ! ! // verify the data are record correct. ! assertEquals(expected1,actual); ! ! /* ! * the caller's buffer MUST be used since it has sufficient bytes ! * remaining ! */ ! assertTrue("Caller's buffer was not used.", actual==buf); ! ! /* ! * verify the position and limit after the read. ! */ ! assertEquals(0,actual.position()); ! assertEquals(len,actual.limit()); ! ! } ! ! public void test_writeReadWith2ndBuffer_excessCapacity_nonZeroPosition() { ! ! IRawStore store = getStore(); ! ! Random r = new Random(); ! ! final int len = 100; ! ! byte[] expected1 = new byte[len]; ! ! r.nextBytes(expected1); ! ! ByteBuffer tmp = ByteBuffer.wrap(expected1); ! ! long addr1 = store.write(tmp); ! ! // verify that the position is advanced to the limit. ! assertEquals(len,tmp.position()); ! assertEquals(tmp.position(),tmp.limit()); ! ! // a buffer large enough to hold the record. ! ByteBuffer buf = ByteBuffer.allocate(len+2); ! buf.position(1); // advance the position by one byte. ! ! // read the data, offering our buffer. ! ByteBuffer actual = store.read(addr1, buf); ! ! 
// copy the expected data leaving the first byte zero. ! byte[] expected2 = new byte[len+1]; ! System.arraycopy(expected1, 0, expected2, 1, expected1.length); ! ! // verify the data are record correct. ! assertEquals(expected2,actual); ! ! /* ! * the caller's buffer MUST be used since it has sufficient bytes ! * remaining ! */ ! assertTrue("Caller's buffer was not used.", actual==buf); ! ! /* ! * verify the position and limit after the read. ! */ ! assertEquals(0,actual.position()); ! assertEquals(len+1,actual.limit()); ! ! } ! ! /** ! * Test verifies read behavior when the offered buffer does not have ! * sufficient remaining capacity. ! */ ! public void test_writeReadWith2ndBuffer_wouldUnderflow_nonZeroPosition() { ! ! IRawStore store = getStore(); ! ! Random r = new Random(); ! ! final int len = 100; ! ! byte[] expected1 = new byte[len]; ! ! r.nextBytes(expected1); ! ! ByteBuffer tmp = ByteBuffer.wrap(expected1); ! ! long addr1 = store.write(tmp); ! ! // verify that the position is advanced to the limit. ! assertEquals(len,tmp.position()); ! assertEquals(tmp.position(),tmp.limit()); ! ! // a buffer that is large enough to hold the record. ! ByteBuffer buf = ByteBuffer.allocate(len); ! buf.position(1); // but advance the position so that there is not enough room. ! ! // read the data, offering our buffer. ! ByteBuffer actual = store.read(addr1, buf); ! ! // verify the data are record correct. ! assertEquals(expected1,actual); ! ! /* ! * the caller's buffer MUST NOT be used since it does not have ! * sufficient bytes remaining. ! */ ! assertFalse("Caller's buffer was used.", actual==buf); ! ! /* ! * verify the position and limit after the read. ! */ ! assertEquals(0,actual.position()); ! assertEquals(len,actual.limit()); ! ! } ! ! /** ! * Test verifies read behavior when the offered buffer does not have ! * sufficient remaining capacity. ! */ ! public void test_writeReadWith2ndBuffer_wouldUnderflow_zeroPosition() { ! ! IRawStore store = getStore(); ! ! 
Random r = new Random(); ! ! final int len = 100; ! ! byte[] expected1 = new byte[len]; ! ! r.nextBytes(expected1); ! ! ByteBuffer tmp = ByteBuffer.wrap(expected1); ! ! long addr1 = store.write(tmp); ! ! // verify that the position is advanced to the limit. ! assertEquals(len,tmp.position()); ! assertEquals(tmp.position(),tmp.limit()); ! ! // a buffer that is not large enough to hold the record. ! ByteBuffer buf = ByteBuffer.allocate(len-1); ! ! // read the data, offering our buffer. ! ByteBuffer actual = store.read(addr1, buf); ! ! // verify the data are record correct. ! assertEquals(expected1,actual); ! ! /* ! * the caller's buffer MUST NOT be used since it does not have ! * sufficient bytes remaining. ! */ ! assertFalse("Caller's buffer was used.", actual==buf); ! ! /* ! * verify the position and limit after the read. ! */ ! assertEquals(0,actual.position()); ! assertEquals(len,actual.limit()); ! ! } ! ! /** ! * Test verifies that an oversized buffer provided to ! * {@link IRawStore#read(long, ByteBuffer)} will not cause more bytes to be ! * read than are indicated by the {@link Addr address}. ! */ ! public void test_writeReadWith2ndBuffer_wouldOverflow_zeroPosition() { ! ! IRawStore store = getStore(); ! ! Random r = new Random(); ! ! final int len = 100; ! ! byte[] expected1 = new byte[len]; ! ! r.nextBytes(expected1); ! ! ByteBuffer tmp = ByteBuffer.wrap(expected1); ! ! long addr1 = store.write(tmp); ! ! // verify that the position is advanced to the limit. ! assertEquals(len,tmp.position()); ! assertEquals(tmp.position(),tmp.limit()); ! ! // a buffer that is more than large enough to hold the record. ! ByteBuffer buf = ByteBuffer.allocate(len+1); ! ! // read the data, offering our buffer. ! ByteBuffer actual = store.read(addr1, buf); ! ! // verify the data are record correct - only [len] bytes should be copied. ! assertEquals(expected1,actual); ! ! /* ! * the caller's buffer MUST be used since it has sufficient bytes ! * remaining. ! */ ! 
assertTrue("Caller's buffer was used.", actual==buf); ! ! /* ! * verify the position and limit after the read. ! */ ! assertEquals(0,actual.position()); ! assertEquals(len,actual.limit()); ! ! } ! ! /** ! * Test verifies that an oversized buffer provided to ! * {@link IRawStore#read(long, ByteBuffer)} will not cause more bytes to be ! * read than are indicated by the {@link Addr address}. ! */ ! public void test_writeReadWith2ndBuffer_wouldOverflow_nonZeroPosition() { ! ! IRawStore store = getStore(); ! ! Random r = new Random(); ! ! final int len = 100; ! ! byte[] expected1 = new byte[len]; ! ! r.nextBytes(expected1); ! ! ByteBuffer tmp = ByteBuffer.wrap(expected1); ! ! long addr1 = store.write(tmp); ! ! // verify that the position is advanced to the limit. ! assertEquals(len,tmp.position()); ! assertEquals(tmp.position(),tmp.limit()); ! ! // a buffer that is more than large enough to hold the record. ! ByteBuffer buf = ByteBuffer.allocate(len+2); ! ! // non-zero position. ! buf.position(1); ! ! // read the data, offering our buffer. ! ByteBuffer actual = store.read(addr1, buf); ! ! // copy the expected data leaving the first byte zero. ! byte[] expected2 = new byte[len+1]; ! System.arraycopy(expected1, 0, expected2, 1, expected1.length); ! ! // verify the data are record correct - only [len] bytes should be copied. ! assertEquals(expected2,actual); ! ! /* ! * the caller's buffer MUST be used since it has sufficient bytes ! * remaining. ! */ ! assertTrue("Caller's buffer was used.", actual==buf); ! ! /* ! * verify the position and limit after the read. ! */ ! assertEquals(0,actual.position()); ! assertEquals(len+1,actual.limit()); ! ! } ! /** * Test verifies that write does not permit changes to the store state by --- 427,766 ---- } ! // /** ! // * Test verifies read behavior when the offered buffer has exactly the ! // * required #of bytes of remaining. ! // */ ! // public void test_writeReadWith2ndBuffer_exactCapacity() { ! // ! 
// IRawStore store = getStore(); ! // ! // Random r = new Random(); ! // ! // final int len = 100; ! // ! // byte[] expected1 = new byte[len]; ! // ! // r.nextBytes(expected1); ! // ! // ByteBuffer tmp = ByteBuffer.wrap(expected1); ! // ! // long addr1 = store.write(tmp); ! // ! // // verify that the position is advanced to the limit. ! // assertEquals(len,tmp.position()); ! // assertEquals(tmp.position(),tmp.limit()); ! // ! // // a buffer large enough to hold the record. ! // ByteBuffer buf = ByteBuffer.allocate(len); ! // ! // // read the data, offering our buffer. ! // ByteBuffer actual = store.read(addr1, buf); ! // ! // // verify the data are record correct. ! // assertEquals(expected1,actual); ! // ! // /* ! // * the caller's buffer MUST be used since it has sufficient bytes ! // * remaining ! // */ ! // assertTrue("Caller's buffer was not used.", actual==buf); ! // ! // /* ! // * verify the position and limit after the read. ! // */ ! // assertEquals(0,actual.position()); ! // assertEquals(len,actual.limit()); ! // ! // } ! // ! // public void test_writeReadWith2ndBuffer_excessCapacity_zeroPosition() { ! // ! // IRawStore store = getStore(); ! // ! // Random r = new Random(); ! // ! // final int len = 100; ! // ! // byte[] expected1 = new byte[len]; ! // ! // r.nextBytes(expected1); ! // ! // ByteBuffer tmp = ByteBuffer.wrap(expected1); ! // ! // long addr1 = store.write(tmp); ! // ! // // verify that the position is advanced to the limit. ! // assertEquals(len,tmp.position()); ! // assertEquals(tmp.position(),tmp.limit()); ! // ! // // a buffer large enough to hold the record. ! // ByteBuffer buf = ByteBuffer.allocate(len+1); ! // ! // // read the data, offering our buffer. ! // ByteBuffer actual = store.read(addr1, buf); ! // ! // // verify the data are record correct. ! // assertEquals(expected1,actual); ! // ! // /* ! // * the caller's buffer MUST be used since it has sufficient bytes ! // * remaining ! // */ ! 
// assertTrue("Caller's buffer was not used.", actual==buf); ! // ! // /* ! // * verify the position and limit after the read. ! // */ ! // assertEquals(0,actual.position()); ! // assertEquals(len,actual.limit()); ! // ! // } ! // ! // public void test_writeReadWith2ndBuffer_excessCapacity_nonZeroPosition() { ! // ! // IRawStore store = getStore(); ! // ! // Random r = new Random(); ! // ! // final int len = 100; ! // ! // byte[] expected1 = new byte[len]; ! // ! // r.nextBytes(expected1); ! // ! // ByteBuffer tmp = ByteBuffer.wrap(expected1); ! // ! // long addr1 = store.write(tmp); ! // ! // // verify that the position is advanced to the limit. ! // assertEquals(len,tmp.position()); ! // assertEquals(tmp.position(),tmp.limit()); ! // ! // // a buffer large enough to hold the record. ! // ByteBuffer buf = ByteBuffer.allocate(len+2); ! // buf.position(1); // advance the position by one byte. ! // ! // // read the data, offering our buffer. ! // ByteBuffer actual = store.read(addr1, buf); ! // ! // // copy the expected data leaving the first byte zero. ! // byte[] expected2 = new byte[len+1]; ! // System.arraycopy(expected1, 0, expected2, 1, expected1.length); ! // ! // // verify the data are record correct. ! // assertEquals(expected2,actual); ! // ! // /* ! // * the caller's buffer MUST be used since it has sufficient bytes ! // * remaining ! // */ ! // assertTrue("Caller's buffer was not used.", actual==buf); ! // ! // /* ! // * verify the position and limit after the read. ! // */ ! // assertEquals(0,actual.position()); ! // assertEquals(len+1,actual.limit()); ! // ! // } ! // ! // /** ! // * Test verifies read behavior when the offered buffer does not have ! // * sufficient remaining capacity. ! // */ ! // public void test_writeReadWith2ndBuffer_wouldUnderflow_nonZeroPosition() { ! // ! // IRawStore store = getStore(); ! // ! // Random r = new Random(); ! // ! // final int len = 100; ! // ! // byte[] expected1 = new byte[len]; ! // ! 
// r.nextBytes(expected1); ! // ! // ByteBuffer tmp = ByteBuffer.wrap(expected1); ! // ! // long addr1 = store.write(tmp); ! // ! // // verify that the position is advanced to the limit. ! // assertEquals(len,tmp.position()); ! // assertEquals(tmp.position(),tmp.limit()); ! // ! // // a buffer that is large enough to hold the record. ! // ByteBuffer buf = ByteBuffer.allocate(len); ! // buf.position(1); // but advance the position so that there is not enough room. ! // ! // // read the data, offering our buffer. ! // ByteBuffer actual = store.read(addr1, buf); ! // ! // // verify the data are record correct. ! // assertEquals(expected1,actual); ! // ! // /* ! // * the caller's buffer MUST NOT be used since it does not have ! // * sufficient bytes remaining. ! // */ ! // assertFalse("Caller's buffer was used.", actual==buf); ! // ! // /* ! // * verify the position and limit after the read. ! // */ ! // assertEquals(0,actual.position()); ! // assertEquals(len,actual.limit()); ! // ! // } ! // ! // /** ! // * Test verifies read behavior when the offered buffer does not have ! // * sufficient remaining capacity. ! // */ ! // public void test_writeReadWith2ndBuffer_wouldUnderflow_zeroPosition() { ! // ! // IRawStore store = getStore(); ! // ! // Random r = new Random(); ! // ! // final int len = 100; ! // ! // byte[] expected1 = new byte[len]; ! // ! // r.nextBytes(expected1); ! // ! // ByteBuffer tmp = ByteBuffer.wrap(expected1); ! // ! // long addr1 = store.write(tmp); ! // ! // // verify that the position is advanced to the limit. ! // assertEquals(len,tmp.position()); ! // assertEquals(tmp.position(),tmp.limit()); ! // ! // // a buffer that is not large enough to hold the record. ! // ByteBuffer buf = ByteBuffer.allocate(len-1); ! // ! // // read the data, offering our buffer. ! // ByteBuffer actual = store.read(addr1, buf); ! // ! // // verify the data are record correct. ! // assertEquals(expected1,actual); ! // ! // /* ! 
// * the caller's buffer MUST NOT be used since it does not have ! // * sufficient bytes remaining. ! // */ ! // assertFalse("Caller's buffer was used.", actual==buf); ! // ! // /* ! // * verify the position and limit after the read. ! // */ ! // assertEquals(0,actual.position()); ! // assertEquals(len,actual.limit()); ! // ! // } ! // ! // /** ! // * Test verifies that an oversized buffer provided to ! // * {@link IRawStore#read(long, ByteBuffer)} will not cause more bytes to be ! // * read than are indicated by the {@link Addr address}. ! // */ ! // public void test_writeReadWith2ndBuffer_wouldOverflow_zeroPosition() { ! // ! // IRawStore store = getStore(); ! // ! // Random r = new Random(); ! // ! // final int len = 100; ! // ! // byte[] expected1 = new byte[len]; ! // ! // r.nextBytes(expected1); ! // ! // ByteBuffer tmp = ByteBuffer.wrap(expected1); ! // ! // long addr1 = store.write(tmp); ! // ! // // verify that the position is advanced to the limit. ! // assertEquals(len,tmp.position()); ! // assertEquals(tmp.position(),tmp.limit()); ! // ! // // a buffer that is more than large enough to hold the record. ! // ByteBuffer buf = ByteBuffer.allocate(len+1); ! // ! // // read the data, offering our buffer. ! // ByteBuffer actual = store.read(addr1, buf); ! // ! // // verify the data are record correct - only [len] bytes should be copied. ! // assertEquals(expected1,actual); ! // ! // /* ! // * the caller's buffer MUST be used since it has sufficient bytes ! // * remaining. ! // */ ! // assertTrue("Caller's buffer was used.", actual==buf); ! // ! // /* ! // * verify the position and limit after the read. ! // */ ! // assertEquals(0,actual.position()); ! // assertEquals(len,actual.limit()); ! // ! // } ! // ! // /** ! // * Test verifies that an oversized buffer provided to ! // * {@link IRawStore#read(long, ByteBuffer)} will not cause more bytes to be ! // * read than are indicated by the {@link Addr address}. ! // */ ! 
// public void test_writeReadWith2ndBuffer_wouldOverflow_nonZeroPosition() { ! // ! // IRawStore store = getStore(); ! // ! // Random r = new Random(); ! // ! // final int len = 100; ! // ! // byte[] expected1 = new byte[len]; ! // ! // r.nextBytes(expected1); ! // ! // ByteBuffer tmp = ByteBuffer.wrap(expected1); ! // ! // long addr1 = store.write(tmp); ! // ! // // verify that the position is advanced to the limit. ! // assertEquals(len,tmp.position()); ! // assertEquals(tmp.position(),tmp.limit()); ! // ! // // a buffer that is more than large enough to hold the record. ! // ByteBuffer buf = ByteBuffer.allocate(len+2); ! // ! // // non-zero position. ! // buf.position(1); ! // ! // // read the data, offering our buffer. ! // ByteBuffer actual = store.read(addr1, buf); ! // ! // // copy the expected data leaving the first byte zero. ! // byte[] expected2 = new byte[len+1]; ! // System.arraycopy(expected1, 0, expected2, 1, expected1.length); ! // ! // // verify the data are record correct - only [len] bytes should be copied. ! // assertEquals(expected2,actual); ! // ! // /* ! // * the caller's buffer MUST be used since it has sufficient bytes ! // * remaining. ! // */ ! // assertTrue("Caller's buffer was used.", actual==buf); ! // ! // /* ! // * verify the position and limit after the read. ! // */ ! // assertEquals(0,actual.position()); ! // assertEquals(len+1,actual.limit()); ! // ! // } ! // /** * Test verifies that write does not permit changes to the store state by *************** *** 779,783 **** // verify read. ! assertEquals(expected1,store.read(addr1, null)); // clone the data. --- 790,794 ---- // verify read. ! assertEquals(expected1,store.read(addr1)); // clone the data. *************** *** 791,795 **** * the store. */ ! assertEquals(expected2,store.read(addr1, null)); } --- 802,806 ---- * the store. */ ! assertEquals(expected2,store.read(addr1)); } *************** *** 819,823 **** assertEquals(tmp.position(),tmp.limit()); ! 
ByteBuffer actual = store.read(addr1, null); assertEquals(expected1,actual); --- 830,834 ---- assertEquals(tmp.position(),tmp.limit()); ! ByteBuffer actual = store.read(addr1); assertEquals(expected1,actual); *************** *** 841,845 **** // verify no change in store state. ! assertEquals(expected1,store.read(addr1, null)); } --- 852,856 ---- // verify no change in store state. ! assertEquals(expected1,store.read(addr1)); } *************** *** 881,885 **** assertEquals(tmp.position(),tmp.limit()); ! assertEquals(expected,store.read(addr, null)); addrs[i] = addr; --- 892,896 ---- assertEquals(tmp.position(),tmp.limit()); ! assertEquals(expected,store.read(addr)); addrs[i] = addr; *************** *** 901,905 **** byte[] expected = records[order[i]]; ! assertEquals(expected,store.read(addr, null)); } --- 912,916 ---- byte[] expected = records[order[i]]; ! assertEquals(expected,store.read(addr)); } |
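The test refactoring above retires the two-argument `read(addr, dst)` in favor of a one-argument `read(addr)`, and the surviving assertions still pin down the same ByteBuffer contract: a write consumes the caller's buffer from position to limit, and a read hands back a buffer with position 0 and limit equal to the record length. A minimal sketch of that contract against a toy in-memory store (the store here is illustrative, not the bigdata `IRawStore` API):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.Random;

public class ReadContractSketch {

    // Toy in-memory "store": write() consumes the caller's buffer from
    // position to limit; read() returns a buffer with position=0 and
    // limit=record length. These semantics mirror what the tests assert.
    static byte[] stored;

    static long write(ByteBuffer data) {
        stored = new byte[data.remaining()];
        data.get(stored); // advances the caller's position to its limit
        return 0L; // toy address
    }

    static ByteBuffer read(long addr) {
        return ByteBuffer.wrap(stored.clone());
    }

    public static void main(String[] args) {
        byte[] expected = new byte[100];
        new Random().nextBytes(expected);

        ByteBuffer tmp = ByteBuffer.wrap(expected);
        long addr = write(tmp);

        // verify that the position is advanced to the limit.
        if (tmp.position() != tmp.limit()) throw new AssertionError();

        ByteBuffer actual = read(addr);

        // verify the position and limit after the read.
        if (actual.position() != 0 || actual.limit() != expected.length)
            throw new AssertionError();

        byte[] got = new byte[actual.remaining()];
        actual.get(got);
        if (!Arrays.equals(expected, got)) throw new AssertionError();

        System.out.println("ok");
    }
}
```

The checks above are the same position/limit assertions the journal test suite makes after every write/read round trip.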
From: Bryan T. <tho...@us...> - 2007-02-21 20:17:34
|
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv5461/src/java/com/bigdata/scaleup Modified Files: PartitionedJournal.java Log Message: Further work supporting transactional isolation. Index: PartitionedJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/scaleup/PartitionedJournal.java,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -d -r1.8 -r1.9 *** PartitionedJournal.java 17 Feb 2007 21:34:21 -0000 1.8 --- PartitionedJournal.java 21 Feb 2007 20:17:22 -0000 1.9 *************** *** 1153,1162 **** } public boolean isStable() { return slave.isStable(); } ! public ByteBuffer read(long addr, ByteBuffer dst) { ! return slave.read(addr, dst); } --- 1153,1174 ---- } + /** + * A partitioned journal always reports stable and does not allow the use + * of a non-stable backing store for the {@link SlaveJournal}. + */ public boolean isStable() { return slave.isStable(); } ! /** ! * true iff the {@link SlaveJournal} is fully buffered (this does not ! * consider the index segments). ! */ ! public boolean isFullyBuffered() { ! return slave.isFullyBuffered(); ! } ! ! public ByteBuffer read(long addr) { ! return slave.read(addr); } *************** *** 1201,1203 **** --- 1213,1239 ---- } + public void abort(long ts) { + slave.abort(ts); + } + + public long commit(long ts) { + return slave.commit(ts); + } + + public IIndex getIndex(String name, long ts) { + return slave.getIndex(name, ts); + } + + public long newReadCommittedTx() { + return slave.newReadCommittedTx(); + } + + public long newTx() { + return slave.newTx(); + } + + public long newTx(boolean readOnly) { + return slave.newTx(readOnly); + } + } |
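The methods added in the diff above simply forward the transaction API (`newTx`, `commit`, `abort`, `getIndex`) to the `SlaveJournal`, keeping the partitioned view a thin facade. A stripped-down sketch of that delegation pattern, with hypothetical names standing in for the actual bigdata classes:

```java
// Minimal sketch of the delegation pattern used in PartitionedJournal.
// TxService, Slave, and Master are simplified stand-ins, not bigdata APIs.
public class DelegationSketch {

    interface TxService {
        long newTx();
        long commit(long ts);
        void abort(long ts);
    }

    // an in-memory "slave" that actually implements the behavior
    static class Slave implements TxService {
        private long next = 1;
        public long newTx() { return next++; }
        public long commit(long ts) { return ts; }
        public void abort(long ts) { /* no-op in this sketch */ }
    }

    // the "master" exposes the same API by forwarding every call,
    // so callers never need to know about the backing store
    static class Master implements TxService {
        private final TxService slave;
        Master(TxService slave) { this.slave = slave; }
        public long newTx() { return slave.newTx(); }
        public long commit(long ts) { return slave.commit(ts); }
        public void abort(long ts) { slave.abort(ts); }
    }

    public static void main(String[] args) {
        Master m = new Master(new Slave());
        long ts = m.newTx();
        if (m.commit(ts) != ts) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The design choice is that the partitioned journal can later intercept individual calls (e.g., to route reads across index segments) without changing its public surface.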
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/journal In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv5461/src/test/com/bigdata/journal Modified Files: TestDiskJournal.java AbstractRestartSafeTestCase.java TestMappedJournal.java TestTemporaryStore.java TestDirectJournal.java AbstractTestCase.java BenchmarkJournalWriteRate.java AbstractCommitRecordTestCase.java AbstractBufferStrategyTestCase.java TestTransientJournal.java TestTx.java StressTestConcurrent.java Added Files: AbstractMROWTestCase.java ComparisonTestDriver.java AbstractBTreeWithJournalTestCase.java Log Message: Further work supporting transactional isolation. --- NEW FILE: AbstractBTreeWithJournalTestCase.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. 
Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Nov 17, 2006 */ package com.bigdata.journal; import java.util.Properties; import com.bigdata.objndx.AbstractBTreeTestCase; import com.bigdata.objndx.BTree; import com.bigdata.objndx.SimpleEntry; /** * Stress tests of the {@link BTree} writing on the {@link Journal}. This suite * simply contains stress tests of the btree operations at larger scale and * including incremental writes against the store. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ abstract public class AbstractBTreeWithJournalTestCase extends AbstractBTreeTestCase { public AbstractBTreeWithJournalTestCase() { } public AbstractBTreeWithJournalTestCase(String name) { super(name); } abstract public BufferMode getBufferMode(); public Properties getProperties() { if (properties == null) { properties = super.getProperties(); properties.setProperty(Options.BUFFER_MODE, getBufferMode().toString() ); } return properties; } private Properties properties; /** * Return a btree backed by a journal with the indicated branching factor. * The serializer requires that values in leaves are {@link SimpleEntry} * objects. * * @param branchingFactor * The branching factor. * * @return The btree. */ public BTree getBTree(int branchingFactor) { Properties properties = getProperties(); Journal journal = new Journal(properties); BTree btree = new BTree(journal, branchingFactor, SimpleEntry.Serializer.INSTANCE); return btree; } /* * @todo try large branching factors, but limit the total #of keys inserted * or the running time will be too long (I am using an exponential #of keys * by default). * * Note: For sequential keys, m=128 causes the journal to exceed its initial * extent. 
*/ int[] branchingFactors = new int[]{3,4,5,10};//,20,64};//,128};//,512}; /** * A stress test for sequential key insertion that runs with a variety of * branching factors and #of keys to insert. */ public void test_splitRootLeaf_increasingKeySequence() { for(int i=0; i<branchingFactors.length; i++) { int m = branchingFactors[i]; doSplitWithIncreasingKeySequence( getBTree(m), m, m ); doSplitWithIncreasingKeySequence( getBTree(m), m, m*m ); doSplitWithIncreasingKeySequence( getBTree(m), m, m*m*m ); doSplitWithIncreasingKeySequence( getBTree(m), m, m*m*m*m ); } } /** * A stress test for sequential decreasing key insertions that runs with a * variety of branching factors and #of keys to insert. */ public void test_splitRootLeaf_decreasingKeySequence() { for(int i=0; i<branchingFactors.length; i++) { int m = branchingFactors[i]; doSplitWithDecreasingKeySequence( getBTree(m), m, m ); doSplitWithDecreasingKeySequence( getBTree(m), m, m*m ); doSplitWithDecreasingKeySequence( getBTree(m), m, m*m*m ); doSplitWithDecreasingKeySequence( getBTree(m), m, m*m*m*m ); } } /** * A stress test for random key insertion that runs with a variety * of branching factors and #of keys to insert. */ public void test_splitRootLeaf_randomKeySequence() { for(int i=0; i<branchingFactors.length; i++) { int m = branchingFactors[i]; doSplitWithRandomDenseKeySequence( getBTree(m), m, m ); doSplitWithRandomDenseKeySequence( getBTree(m), m, m*m ); doSplitWithRandomDenseKeySequence( getBTree(m), m, m*m*m ); // This case overflows the default journal extent. // doSplitWithRandomKeySequence( getBTree(m), m, m*m*m*m ); } } /** * Stress test inserts random permutations of keys into btrees of order m * for several different btrees, #of keys to be inserted, and permutations * of keys. 
*/ public void test_stress_split() { for(int i=0; i<branchingFactors.length; i++) { int m = branchingFactors[i]; doSplitTest( m, 0 ); } } /** * Stress test of insert, removal and lookup of keys in the tree (allows * splitting of the root leaf). */ public void test_insertLookupRemoveKeyTreeStressTest() { int nkeys = 2000; int ntrials = 25000; for(int i=0; i<branchingFactors.length; i++) { int m = branchingFactors[i]; doInsertLookupRemoveStressTest(m, nkeys, ntrials); } } /** * Stress test for building up a tree and then removing all keys in a random * order. */ public void test_stress_removeStructure() { int nkeys = 5000; for(int i=0; i<branchingFactors.length; i++) { int m = branchingFactors[i]; doRemoveStructureStressTest(m,nkeys); } } } Index: TestDiskJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestDiskJournal.java,v retrieving revision 1.6 retrieving revision 1.7 diff -C2 -d -r1.6 -r1.7 *** TestDiskJournal.java 8 Feb 2007 21:32:08 -0000 1.6 --- TestDiskJournal.java 21 Feb 2007 20:17:20 -0000 1.7 *************** *** 91,94 **** --- 91,100 ---- suite.addTestSuite( TestRawStore.class ); + // test suite for MROW correctness. + suite.addTestSuite( TestMROW.class ); + + // test suite for btree on the journal. + suite.addTestSuite( TestBTree.class ); + /* * Pickup the basic journal test suite. This is a proxied test suite, so *************** *** 133,136 **** --- 139,143 ---- assertTrue("isStable",bufferStrategy.isStable()); + assertFalse("isFullyBuffered",bufferStrategy.isFullyBuffered()); assertEquals(Options.FILE, properties.getProperty(Options.FILE), bufferStrategy.file.toString()); assertEquals(Options.INITIAL_EXTENT, Options.DEFAULT_INITIAL_EXTENT, *************** *** 175,177 **** --- 182,232 ---- } + /** + * Test suite integration for {@link AbstractMROWTestCase}. 
+ * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + * @version $Id$ + */ + public static class TestMROW extends AbstractMROWTestCase { + + public TestMROW() { + super(); + } + + public TestMROW(String name) { + super(name); + } + + protected BufferMode getBufferMode() { + + return BufferMode.Disk; + + } + + } + + /** + * Test suite integration for {@link AbstractBTreeWithJournalTestCase}. + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + * @version $Id$ + */ + public static class TestBTree extends AbstractBTreeWithJournalTestCase { + + public TestBTree() { + super(); + } + + public TestBTree(String name) { + super(name); + } + + public BufferMode getBufferMode() { + + return BufferMode.Disk; + + } + + } + } Index: AbstractCommitRecordTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/AbstractCommitRecordTestCase.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** AbstractCommitRecordTestCase.java 17 Feb 2007 21:34:13 -0000 1.1 --- AbstractCommitRecordTestCase.java 21 Feb 2007 20:17:20 -0000 1.2 *************** *** 94,98 **** final long timestamp = System.currentTimeMillis(); ! final int n = ICommitRecord.MAX_ROOT_ADDRS; --- 94,101 ---- final long timestamp = System.currentTimeMillis(); ! ! // using the clock for this as well so that it is an ascending value. ! final long commitCounter = System.currentTimeMillis(); ! final int n = ICommitRecord.MAX_ROOT_ADDRS; *************** *** 107,111 **** } ! return new CommitRecord(timestamp,roots); } --- 110,114 ---- } ! return new CommitRecord(timestamp,commitCounter,roots); } --- NEW FILE: AbstractMROWTestCase.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. 
License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Feb 20, 2007 */ package com.bigdata.journal; import java.io.File; import java.nio.ByteBuffer; import java.nio.MappedByteBuffer; import java.util.Collection; import java.util.HashSet; import java.util.Iterator; import java.util.List; import java.util.Properties; import java.util.Random; import java.util.concurrent.Callable; import java.util.concurrent.ExecutionException; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; import com.bigdata.rawstore.Addr; import com.bigdata.util.concurrent.DaemonThreadFactory; /** * Test suite for MROW (Multiple Readers, One Writer) support. * <p> * Supporting MROW is easy for a fully buffered implementation since it need * only use a read-only view for readers. 
If the implementation is not fully * buffered, e.g., {@link DiskOnlyStrategy}, then it needs to serialize reads * that are not buffered. The exception as always is the * {@link MappedBufferStrategy} - since this uses the nio * {@link MappedByteBuffer} it supports concurrent readers using the same * approach as a fully buffered strategy even though data may not always reside * in memory. * * @todo This test suite could also be used to tune AIO (asynchronous IO) * support for the {@link DirectBufferStrategy} and the * {@link DiskOnlyStrategy}. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ abstract public class AbstractMROWTestCase extends AbstractBufferStrategyTestCase { /** * */ public AbstractMROWTestCase() { } /** * @param name */ public AbstractMROWTestCase(String name) { super(name); } /** * Correctness/stress test verifies that the implementation supports * Multiple Readers One Writer (MROW). */ public void test_mrow() throws Exception { IBufferStrategy store = ((Journal)getStore()).getBufferStrategy(); final long timeout = 5; final int nclients = 20; final int nwrites = 10000; final int writeDelayMillis = 1; final int ntrials = 10000; final int reclen = 128; final int nreads = 100; doMROWTest(store, nwrites, writeDelayMillis, timeout, nclients, ntrials, reclen, nreads); } /** * A correctness/stress/performance test with a pool of concurrent clients * designed to verify MROW operations. If the store passes these tests, then * {@link StressTestConcurrent} is designed to reveal concurrency problems * in the higher level data structures (transaction process and especially * the indices). * * @param store * The store. * * @param nwrites * The #of records to write. * * @param writeDelayMillis * The #of milliseconds delay between writes. * * @param timeout * The timeout (seconds). * * @param nclients * The #of concurrent clients. * * @param ntrials * The #of distinct client trials to execute. 
* * @param reclen * The length of the random byte[] records used in the * operations. * * @param nreads * The #of operations to be performed in each transaction. */ static public void doMROWTest(IBufferStrategy store, int nwrites, long writeDelayMillis, long timeout, int nclients, int ntrials, int reclen, int nreads) throws Exception { // A single-threaded writer. ExecutorService writerExecutor = Executors .newSingleThreadExecutor(DaemonThreadFactory .defaultThreadFactory()); WriterTask writerTask = new WriterTask(store, reclen, nwrites, writeDelayMillis); /* * Pre-write 25% of the records so that clients have something to * choose from when they start running. */ final int npreWrites = nwrites/4; for( int i=0; i<npreWrites; i++) { // write a single record. writerTask.write(); } System.err.println("Pre-wrote "+npreWrites+" records"); // start the writer. writerExecutor.submit(writerTask); // Concurrent readers. ExecutorService readerExecutor = Executors.newFixedThreadPool( nclients, DaemonThreadFactory.defaultThreadFactory()); // Setup readers queue. Collection<Callable<Long>> tasks = new HashSet<Callable<Long>>(); for(int i=0; i<ntrials; i++) { tasks.add(new ReaderTask(store, writerTask, nreads)); } /* * Run the M trials on N clients. */ final long begin = System.currentTimeMillis(); // start readers. List<Future<Long>> results = readerExecutor.invokeAll(tasks, timeout, TimeUnit.SECONDS); final long elapsed = System.currentTimeMillis() - begin; // force the writer to terminate. writerExecutor.shutdownNow(); // force the reads to terminate. readerExecutor.shutdownNow(); if(!writerExecutor.awaitTermination(1, TimeUnit.SECONDS)) { System.err.println("Writer did not terminate."); } if (!readerExecutor.awaitTermination(1, TimeUnit.SECONDS)) { /* * Note: if readers do not terminate within the timeout then an * IOException MAY be reported by disk-backed stores if the store is * closed while readers are still attempting to resolve records on * disk. 
*/ System.err.println("Reader(s) did not terminate."); } // #of records actually written. final int nwritten = writerTask.nrecs; Iterator<Future<Long>> itr = results.iterator(); int nok = 0; // #of trials that successfully committed. int ncancelled = 0; // #of trials that did not complete in time. int nerr = 0; Throwable[] errors = new Throwable[ntrials]; while(itr.hasNext()) { Future<Long> future = itr.next(); if(future.isCancelled()) { ncancelled++; continue; } try { future.get(); // ignore the return (always zero). nok++; } catch(ExecutionException ex ) { System.err.println("Not expecting: "+ex); errors[nerr++] = ex.getCause(); } } System.err.println("mode=" + store.getBufferMode() + ", #clients=" + nclients + ", ntrials=" + ntrials + ", nok=" + nok + ", ncancelled=" + ncancelled + ", nerrors=" + nerr + " in " + elapsed + "ms (" + nok * 1000 / elapsed + " reads per second); nwritten=" + nwritten); } /** * A ground truth record as generated by a {@link WriterTask}. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public static class Record { public final long addr; public final byte[] data; public Record(long addr, byte[] data) { assert addr != 0L; assert data != null; this.addr = addr; this.data = data; } }; /** * Run a writer. * <p> * The writer exposes state to the readers so that they can perform reads on * written records and so that they can validate those reads against ground * truth. */ public static class WriterTask implements Callable<Integer> { private final IBufferStrategy store; private final int reclen; private final int nwrites; private final long writeDelayMillis; /** * The #of records in {@link #records}. */ private volatile int nrecs = 0; /** * The ground truth data written so far. */ private volatile Record[] records; final Random r = new Random(); /** * Returns random data that will fit in N bytes. N is chosen randomly in * 1:<i>reclen</i>. 
* * @return A new {@link ByteBuffer} wrapping a new <code>byte[]</code> of * random length and having random contents. */ private ByteBuffer getRandomData() { final int nbytes = r.nextInt(reclen) + 1; byte[] bytes = new byte[nbytes]; r.nextBytes(bytes); return ByteBuffer.wrap(bytes); } public WriterTask(IBufferStrategy store, int reclen, int nwrites, long writeDelayMillis) { this.store = store; this.reclen = reclen; this.nwrites = nwrites; this.writeDelayMillis = writeDelayMillis; this.records = new Record[nwrites]; } /** * Return a randomly chosen ground truth record. */ public Record getRandomGroundTruthRecord() { int index = r.nextInt(nrecs); return records[ index ]; } /** * Writes any remaining records (starts from nrecs and runs to nwrites * so we can pre-write some records first). * * @return The #of records written. */ public Integer call() throws Exception { for (int i = nrecs; i < nwrites; i++) { write(); /* * Note: it is difficult to get this task to yield such that a * large #of records are written, but not before the readers * even get a chance to start executing. You may have to adjust * this by hand for different JVM/OS combinations! */ Thread.sleep(0, 1); // Thread.yield(); // Thread.sleep(writeDelayMillis,writeDelayNanos); // long begin = System.nanoTime(); // long elapsed = 0L; // while(elapsed<1000) { // // Thread.yield(); // elapsed = System.nanoTime() - begin; // // } } System.err.println("Writer done: nwritten="+nrecs); return nrecs; } /** * Write a random record and record it in {@link #records}. */ public void write() { ByteBuffer data = getRandomData(); final long addr = store.write(data); records[nrecs] = new Record(addr, data.array()); nrecs++; } } /** * Run a reader. */ public static class ReaderTask implements Callable<Long> { private final IBufferStrategy store; private final WriterTask writer; private final int nops; final Random r = new Random(); /** * * @param store * @param writer * @param nops #of reads to perform. 
*/ public ReaderTask(IBufferStrategy store, WriterTask writer, int nops) { this.store = store; this.writer = writer; this.nops = nops; } /** * Executes random reads and validates against ground truth. */ public Long call() throws Exception { // Random reads. for (int i = 0; i < nops; i++) { // Thread.yield(); Record record = writer.getRandomGroundTruthRecord(); ByteBuffer buf; if (r.nextInt(100) > 30) { buf = store.read(record.addr); } else { buf = ByteBuffer.allocate(Addr.getByteCount(record.addr)); buf = store.read(record.addr); } assertEquals(record.data, buf); } return 0L; } } /** * Correctness/stress/performance test for MROW behavior. */ public static void main(String[] args) throws Exception { // timeout in seconds. final long timeout = 10; final int nclients = 20; final int nwrites = 10000; final int writeDelayMillis = 1; final int ntrials = 100000; final int reclen = 1024; final int nreads = 100; Properties properties = new Properties(); // properties.setProperty(Options.USE_DIRECT_BUFFERS,"false"); // properties.setProperty(Options.BUFFER_MODE, BufferMode.Transient.toString()); // properties.setProperty(Options.USE_DIRECT_BUFFERS,"true"); // properties.setProperty(Options.BUFFER_MODE, BufferMode.Transient.toString()); // properties.setProperty(Options.USE_DIRECT_BUFFERS,"false"); // properties.setProperty(Options.BUFFER_MODE, BufferMode.Direct.toString()); properties.setProperty(Options.USE_DIRECT_BUFFERS,"true"); properties.setProperty(Options.BUFFER_MODE, BufferMode.Direct.toString()); // properties.setProperty(Options.BUFFER_MODE, BufferMode.Mapped.toString()); // properties.setProperty(Options.BUFFER_MODE, BufferMode.Disk.toString()); properties.setProperty(Options.SEGMENT, "0"); File file = File.createTempFile("bigdata", ".jnl"); file.deleteOnExit(); if(!file.delete()) fail("Could not remove temp file before test"); properties.setProperty(Options.FILE, file.toString()); Journal journal = new Journal(properties); 
doMROWTest(journal.getBufferStrategy(), nwrites, writeDelayMillis, timeout, nclients, ntrials, reclen, nreads); journal.shutdown(); } } Index: AbstractBufferStrategyTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/AbstractBufferStrategyTestCase.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** AbstractBufferStrategyTestCase.java 15 Feb 2007 22:01:18 -0000 1.2 --- AbstractBufferStrategyTestCase.java 21 Feb 2007 20:17:20 -0000 1.3 *************** *** 59,65 **** /** * * @todo write tests for ! * {@link IBufferStrategy#transferTo(java.io.RandomAccessFile)}. This * code is currently getting "checked" by the {@link IndexSegmentBuilder}. * --- 59,67 ---- /** + * Base class for writing test cases for the different {@link IBufferStrategy} + * implementations. * * @todo write tests for ! * {@link IBufferStrategy#transferTo(java.io.RandomAccessFile)}. This * code is currently getting "checked" by the {@link IndexSegmentBuilder}. * *************** *** 271,275 **** assertEquals("userExtent",userExtent, bufferStrategy.getUserExtent()); ! assertEquals(b, bufferStrategy.read(addr, null)); /* --- 273,277 ---- assertEquals("userExtent",userExtent, bufferStrategy.getUserExtent()); ! assertEquals(b, bufferStrategy.read(addr)); /* *************** *** 296,303 **** // verify data written before we overflowed the buffer. ! assertEquals(b, bufferStrategy.read(addr, null)); // verify data written after we overflowed the buffer. ! assertEquals(b2, bufferStrategy.read(addr2, null)); store.close(); --- 298,305 ---- // verify data written before we overflowed the buffer. ! assertEquals(b, bufferStrategy.read(addr)); // verify data written after we overflowed the buffer. ! 
assertEquals(b2, bufferStrategy.read(addr2)); store.close(); --- NEW FILE: ComparisonTestDriver.java --- /** The Notice below must appear in each file of the Source Code of any copy you distribute of the Licensed Product. Contributors to any Modifications may add their own copyright notices to identify their own contributions. License: The contents of this file are subject to the CognitiveWeb Open Source License Version 1.1 (the License). You may not copy or use this file, in either source code or executable form, except in compliance with the License. You may obtain a copy of the License from http://www.CognitiveWeb.org/legal/license/ Software distributed under the License is distributed on an AS IS basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License. Copyrights: Portions created by or assigned to CognitiveWeb are Copyright (c) 2003-2003 CognitiveWeb. All Rights Reserved. Contact information for CognitiveWeb is available at http://www.CognitiveWeb.org Portions Copyright (c) 2002-2003 Bryan Thompson. Acknowledgements: Special thanks to the developers of the Jabber Open Source License 1.0 (JOSL), from which this License was derived. This License contains terms that differ from JOSL. Special thanks to the CognitiveWeb Open Source Contributors for their suggestions and support of the Cognitive Web. Modifications: */ /* * Created on Feb 21, 2007 */ package com.bigdata.journal; import java.io.File; import java.io.FileWriter; import java.io.IOException; import java.util.ArrayList; import java.util.Arrays; import java.util.Iterator; import java.util.List; import java.util.Properties; import com.bigdata.journal.StressTestConcurrent.TestOptions; import com.bigdata.rawstore.Bytes; /** * A harness for running comparison of different journal configurations. 
* * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public class ComparisonTestDriver { /** * Interface for tests that can be run by {@link ComparisonTestDriver}. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public static interface IComparisonTest { /** * Run a test. * * @param properties * The properties used to configure the test. * * @return The test result to report. */ public String doComparisonTest(Properties properties) throws Exception; } /** * A name-value pair used to override {@link Properties} for a {@link Condition}. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ protected static class NV { public final String name; public final String value; public NV(String name,Object value) { this.name = name; this.value = value.toString(); } public int hashCode() { return name.hashCode(); } public boolean equals(NV o) { return name.equals(o.name) && value.equals(o.value); } } /** * An experimental condition. * * @todo record the results. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ protected static class Condition { public final String name; public final Properties properties; public String result; public Condition(String name,Properties properties) { this.name = name; this.properties = properties; } } /** * Return a {@link Properties} object that inherits defaults from * <i>properties</i> and sets/overrides properties identified in <i>entries</i>. * * @param properties * The inherited properties (this object is NOT modified). * @param entries * The overriden properties. * * @return A new {@link Properties}. */ protected static Condition getCondition(Properties properties, NV[] entries) throws IOException { properties = new Properties(properties); StringBuilder sb = new StringBuilder(); // sb.append("{"); for(int i=0; i<entries.length; i++) { if(i>0) sb.append("; "); // Note: Not a comma since CSV delimited. 
sb.append(entries[i].name+"="+entries[i].value); properties.setProperty(entries[i].name,entries[i].value); } // sb.append("}"); String name = sb.toString(); /* * Create a temporary file for the journal. Note that you must delete * the temporary file before starting the journal since it is empty and * just a placeholder for a unique filename. */ File file = File.createTempFile("bigdata", ".jnl"); file.deleteOnExit(); properties.setProperty(Options.FILE, file.toString()); return new Condition(name,properties); } static public List<Condition> getBasicConditions(Properties properties, NV[] params) throws Exception { properties = new Properties(properties); for(int i=0; i<params.length; i++) { properties.setProperty(params[i].name,params[i].value); } Condition[] conditions = new Condition[] { // getCondition(properties, new NV[] { // new NV(Options.BUFFER_MODE, BufferMode.Transient), // }), // // getCondition( // properties, // new NV[] { // // new NV(Options.BUFFER_MODE, // BufferMode.Transient), // // new NV(Options.USE_DIRECT_BUFFERS, Boolean.TRUE) // // }), // getCondition(properties, new NV[] { // new NV(Options.BUFFER_MODE, BufferMode.Direct), // }), // // getCondition( // properties, // new NV[] { // // new NV(Options.BUFFER_MODE, BufferMode.Direct), // // new NV(Options.USE_DIRECT_BUFFERS, Boolean.TRUE) // // }), // getCondition(properties, new NV[] { // new NV(Options.BUFFER_MODE, BufferMode.Direct), // new NV(Options.FORCE_ON_COMMIT, ForceEnum.No) // }), // getCondition(properties, new NV[] { // new NV(Options.BUFFER_MODE, BufferMode.Disk), // }), // getCondition(properties, new NV[] { // new NV(Options.BUFFER_MODE, BufferMode.Disk), // new NV(Options.FORCE_ON_COMMIT, ForceEnum.No) // }), // }; return Arrays.asList(conditions); } /** * Runs a comparison of various an {@link IComparisonTest} under various * conditions and writes out a summary of the reported results. * * @param args * The name of a class that implements IComparisonTest. 
* * @todo this is not really parameterized very well for the className since * some of the options are specific to the test class. * * @todo it would be nice to factor out the column names for the results. * Perhaps change from a String result to a NVPair<String,String> and * add something to {@link IComparisonTest} to declare the headings? * * @todo Optional name of a properties file to be read. the properties will * be used as the basis for all conditions. */ public static void main(String[] args) throws Exception { String className = args[0]; if(className==null) { className = StressTestConcurrent.class.getName(); } File outFile = new File(className+".comparison.csv"); if(outFile.exists()) throw new IOException("File exists: "+outFile.getAbsolutePath()); Properties properties = new Properties(); // force delete of the files on close of the journal under test. properties.setProperty(Options.DELETE_ON_CLOSE,"true"); properties.setProperty(Options.SEGMENT, "0"); // avoids journal overflow when running out to 60 seconds. 
properties.setProperty(Options.MAXIMUM_EXTENT, ""+Bytes.megabyte32*400); properties.setProperty(TestOptions.TIMEOUT,"30"); List<Condition>conditions = new ArrayList<Condition>(); conditions.addAll(getBasicConditions(properties, new NV[] { new NV( TestOptions.NCLIENTS, "1") })); conditions.addAll(getBasicConditions(properties, new NV[] { new NV( TestOptions.NCLIENTS, "2") })); conditions.addAll(getBasicConditions(properties, new NV[] { new NV( TestOptions.NCLIENTS, "10") })); conditions.addAll(getBasicConditions(properties, new NV[] { new NV( TestOptions.NCLIENTS, "20") })); conditions.addAll(getBasicConditions(properties, new NV[] { new NV( TestOptions.NCLIENTS, "100") })); conditions.addAll(getBasicConditions(properties, new NV[] { new NV( TestOptions.NCLIENTS, "200") })); // properties.setProperty(Options.BUFFER_MODE, // BufferMode.Mapped.toString()); final int nconditions = conditions.size(); { FileWriter writer = new FileWriter(outFile); System.err.println("Running comparison of " + nconditions + " conditions for " + className); Class cl = Class.forName(className); Iterator<Condition> itr = conditions.iterator(); while (itr.hasNext()) { Condition condition = itr.next(); File file = new File(condition.properties .getProperty(Options.FILE)); if (!file.delete()) throw new AssertionError( "Could not remove temp file before test"); IComparisonTest test = (IComparisonTest) cl.newInstance(); System.err.println("Running: "+ condition.name); try { condition.result = test .doComparisonTest(condition.properties); } catch (Exception ex) { condition.result = ex.getMessage(); } System.err.println(condition.result + ", " + condition.name); writer.write(condition.result + ", " + condition.name+"\n"); } writer.flush(); writer.close(); } { System.err.println("Result summary:"); Iterator<Condition> itr = conditions.iterator(); while (itr.hasNext()) { Condition condition = itr.next(); System.err.println(condition.result + ", " + condition.name); } } } } Index: TestTx.java 
=================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestTx.java,v retrieving revision 1.17 retrieving revision 1.18 diff -C2 -d -r1.17 -r1.18 *** TestTx.java 19 Feb 2007 19:00:18 -0000 1.17 --- TestTx.java 21 Feb 2007 20:17:20 -0000 1.18 *************** *** 421,425 **** */ final byte[] id0 = new byte[] { 1 }; ! final byte[] v0 = getRandomData(journal).array(); journal.getIndex(name,tx0).insert(id0, v0); assertEquals(v0, journal.getIndex(name,tx0).lookup(id0)); --- 421,425 ---- */ final byte[] id0 = new byte[] { 1 }; ! final byte[] v0 = getRandomData().array(); journal.getIndex(name,tx0).insert(id0, v0); assertEquals(v0, journal.getIndex(name,tx0).lookup(id0)); *************** *** 445,449 **** * Test write after delete (succeeds, returning null). */ ! final byte[] v1 = getRandomData(journal).array(); assertNull(journal.getIndex(name,tx0).insert(id0, v1)); --- 445,449 ---- * Test write after delete (succeeds, returning null). */ ! final byte[] v1 = getRandomData().array(); assertNull(journal.getIndex(name,tx0).insert(id0, v1)); *************** *** 512,516 **** */ final byte[] id0 = new byte[] { 1 }; ! final byte[] v0 = getRandomData(journal).array(); journal.getIndex(name,tx0).insert(id0, v0); assertEquals(v0, journal.getIndex(name,tx0).lookup(id0)); --- 512,516 ---- */ final byte[] id0 = new byte[] { 1 }; ! final byte[] v0 = getRandomData().array(); journal.getIndex(name,tx0).insert(id0, v0); assertEquals(v0, journal.getIndex(name,tx0).lookup(id0)); *************** *** 536,540 **** * Test write after delete (succeeds, returning null). */ ! final byte[] v1 = getRandomData(journal).array(); assertNull(journal.getIndex(name,tx0).insert(id0, v1)); --- 536,540 ---- * Test write after delete (succeeds, returning null). */ ! final byte[] v1 = getRandomData().array(); assertNull(journal.getIndex(name,tx0).insert(id0, v1)); *************** *** 621,625 **** */ final byte[] id0 = new byte[] { 1 }; ! 
final byte[] v0 = getRandomData(journal).array(); journal.getIndex(name,tx0).insert(id0, v0); assertEquals(v0, journal.getIndex(name,tx0).lookup(id0)); --- 621,625 ---- */ final byte[] id0 = new byte[] { 1 }; ! final byte[] v0 = getRandomData().array(); journal.getIndex(name,tx0).insert(id0, v0); assertEquals(v0, journal.getIndex(name,tx0).lookup(id0)); *************** *** 640,644 **** * write(id0,v1) in tx1. */ ! final byte[] v1 = getRandomData(journal).array(); assertNull(journal.getIndex(name,tx1).insert(id0, v1)); --- 640,644 ---- * write(id0,v1) in tx1. */ ! final byte[] v1 = getRandomData().array(); assertNull(journal.getIndex(name,tx1).insert(id0, v1)); *************** *** 707,711 **** // // Write a random data version for id 0. // final int id0 = 1; ! // final ByteBuffer expected_id0_v0 = getRandomData(journal); // journal.write( id0, expected_id0_v0); // assertEquals(expected_id0_v0.array(),journal.read( id0, null)); --- 707,711 ---- // // Write a random data version for id 0. // final int id0 = 1; ! // final ByteBuffer expected_id0_v0 = getRandomData(); // journal.write( id0, expected_id0_v0); // assertEquals(expected_id0_v0.array(),journal.read( id0, null)); *************** *** 734,738 **** // * NOT be visible to either transaction. // */ ! // final ByteBuffer expected_id0_v1 = getRandomData(journal); // journal.write( id0, expected_id0_v1); //// final ISlotAllocation slots_v1 = journal.objectIndex.getSlots(0); --- 734,738 ---- // * NOT be visible to either transaction. // */ ! // final ByteBuffer expected_id0_v1 = getRandomData(); // journal.write( id0, expected_id0_v1); //// final ISlotAllocation slots_v1 = journal.objectIndex.getSlots(0); *************** *** 768,772 **** // * show up either on the journal or in tx1. // */ ! // final ByteBuffer expected_tx1_id0_v0 = getRandomData(journal); // tx1.write(id0, expected_tx1_id0_v0); // assertDeleted(journal, id0); --- 768,772 ---- // * show up either on the journal or in tx1. // */ ! 
// final ByteBuffer expected_tx1_id0_v0 = getRandomData(); // tx1.write(id0, expected_tx1_id0_v0); // assertDeleted(journal, id0); *************** *** 780,784 **** // * show up either on the journal or in tx1. // */ ! // final ByteBuffer expected_tx0_id0_v0 = getRandomData(journal); // tx0.write(id0, expected_tx0_id0_v0); // assertDeleted(journal, id0); --- 780,784 ---- // * show up either on the journal or in tx1. // */ ! // final ByteBuffer expected_tx0_id0_v0 = getRandomData(); // tx0.write(id0, expected_tx0_id0_v0); // assertDeleted(journal, id0); *************** *** 789,793 **** // * Write a 2nd version on tx0 and reverify. // */ ! // final ByteBuffer expected_tx0_id0_v1 = getRandomData(journal); // tx0.write(id0, expected_tx0_id0_v1); // assertDeleted(journal, id0); --- 789,793 ---- // * Write a 2nd version on tx0 and reverify. // */ ! // final ByteBuffer expected_tx0_id0_v1 = getRandomData(); // tx0.write(id0, expected_tx0_id0_v1); // assertDeleted(journal, id0); *************** *** 798,802 **** // * Write a 2nd version on tx1 and reverify. // */ ! // final ByteBuffer expected_tx1_id0_v1 = getRandomData(journal); // tx1.write(id0, expected_tx1_id0_v1); // assertDeleted(journal, id0); --- 798,802 ---- // * Write a 2nd version on tx1 and reverify. // */ ! // final ByteBuffer expected_tx1_id0_v1 = getRandomData(); // tx1.write(id0, expected_tx1_id0_v1); // assertDeleted(journal, id0); *************** *** 815,819 **** // * Write a 3rd version on tx0 and reverify. // */ ! // final ByteBuffer expected_tx0_id0_v2 = getRandomData(journal); // tx0.write(id0, expected_tx0_id0_v2); // assertDeleted(journal, id0); --- 815,819 ---- // * Write a 3rd version on tx0 and reverify. // */ ! // final ByteBuffer expected_tx0_id0_v2 = getRandomData(); // tx0.write(id0, expected_tx0_id0_v2); // assertDeleted(journal, id0); *************** *** 876,890 **** // // // pre-existing version of id0. ! 
// final ByteBuffer expected_preExistingVersion = getRandomData(journal); // // // Two versions of id0 written during tx0. ! // final ByteBuffer expected0v0 = getRandomData(journal); ! // final ByteBuffer expected0v1 = getRandomData(journal); // // // Three versions of id1. // final int id1 = 2; ! // final ByteBuffer expected1v0 = getRandomData(journal); ! // final ByteBuffer expected1v1 = getRandomData(journal); ! // final ByteBuffer expected1v2 = getRandomData(journal); // // // Write pre-existing version of id0 onto the journal. --- 876,890 ---- // // // pre-existing version of id0. ! // final ByteBuffer expected_preExistingVersion = getRandomData(); // // // Two versions of id0 written during tx0. ! // final ByteBuffer expected0v0 = getRandomData(); ! // final ByteBuffer expected0v1 = getRandomData(); // // // Three versions of id1. // final int id1 = 2; ! // final ByteBuffer expected1v0 = getRandomData(); ! // final ByteBuffer expected1v1 = getRandomData(); ! // final ByteBuffer expected1v2 = getRandomData(); // // // Write pre-existing version of id0 onto the journal. *************** *** 1152,1156 **** final byte[] id1 = new byte[]{1}; ! final byte[] v0 = getRandomData(journal).array(); // write data version on tx1 --- 1152,1156 ---- final byte[] id1 = new byte[]{1}; ! final byte[] v0 = getRandomData().array(); // write data version on tx1 *************** *** 1230,1234 **** final byte[] id0 = new byte[]{1}; ! final byte[] v0 = getRandomData(journal).array(); // data version not visible in global scope. --- 1230,1234 ---- final byte[] id0 = new byte[]{1}; ! final byte[] v0 = getRandomData().array(); // data version not visible in global scope. *************** *** 1312,1317 **** // // // create random data for versions. ! // ByteBuffer expected_id0_v0 = getRandomData(journal); ! // ByteBuffer expected_id0_v1 = getRandomData(journal); // // // data version not visible in global scope. --- 1312,1317 ---- // // // create random data for versions. ! 
// ByteBuffer expected_id0_v0 = getRandomData(); ! // ByteBuffer expected_id0_v1 = getRandomData(); // // // data version not visible in global scope. Index: TestMappedJournal.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestMappedJournal.java,v retrieving revision 1.7 retrieving revision 1.8 diff -C2 -d -r1.7 -r1.8 *** TestMappedJournal.java 8 Feb 2007 21:32:09 -0000 1.7 --- TestMappedJournal.java 21 Feb 2007 20:17:20 -0000 1.8 *************** *** 92,95 **** --- 92,101 ---- suite.addTestSuite( TestRawStore.class ); + // test suite for MROW correctness. + suite.addTestSuite( TestMROW.class ); + + // test suite for BTree on the journal. + suite.addTestSuite( TestBTree.class ); + /* * Pickup the basic journal test suite. This is a proxied test suite, so *************** *** 134,137 **** --- 140,144 ---- assertTrue("isStable",bufferStrategy.isStable()); + assertFalse("isFullyBuffered",bufferStrategy.isFullyBuffered()); assertEquals(Options.FILE, properties.getProperty(Options.FILE), bufferStrategy.file.toString()); assertEquals(Options.INITIAL_EXTENT, Options.DEFAULT_INITIAL_EXTENT, *************** *** 181,183 **** --- 188,238 ---- } + /** + * Test suite integration for {@link AbstractMROWTestCase}. + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + * @version $Id$ + */ + public static class TestMROW extends AbstractMROWTestCase { + + public TestMROW() { + super(); + } + + public TestMROW(String name) { + super(name); + } + + protected BufferMode getBufferMode() { + + return BufferMode.Mapped; + + } + + } + + /** + * Test suite integration for {@link AbstractBTreeWithJournalTestCase}. 
+ * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + * @version $Id$ + */ + public static class TestBTree extends AbstractBTreeWithJournalTestCase { + + public TestBTree() { + super(); + } + + public TestBTree(String name) { + super(name); + } + + public BufferMode getBufferMode() { + + return BufferMode.Mapped; + + } + + } + } Index: TestTemporaryStore.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/TestTemporaryStore.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** TestTemporaryStore.java 15 Feb 2007 20:59:21 -0000 1.1 --- TestTemporaryStore.java 21 Feb 2007 20:17:20 -0000 1.2 *************** *** 57,61 **** /** ! * Test suite for {@link TemporaryStore}. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> --- 57,61 ---- /** ! * Test suite for {@link TemporaryRawStore}. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> *************** *** 79,83 **** protected IRawStore getStore() { ! return new TemporaryStore(); } --- 79,83 ---- protected IRawStore getStore() { ! return new TemporaryRawStore(); } *************** *** 90,94 **** public void test_overflow() { ! TemporaryStore store = (TemporaryStore) getStore(); AbstractBufferStrategy bufferStrategy = (AbstractBufferStrategy) store --- 90,94 ---- public void test_overflow() { ! TemporaryRawStore store = (TemporaryRawStore) getStore(); AbstractBufferStrategy bufferStrategy = (AbstractBufferStrategy) store *************** *** 126,130 **** public void test_writeNoExtend() { ! TemporaryStore store = (TemporaryStore) getStore(); AbstractBufferStrategy bufferStrategy = (AbstractBufferStrategy) store --- 126,130 ---- public void test_writeNoExtend() { ! TemporaryRawStore store = (TemporaryRawStore) getStore(); AbstractBufferStrategy bufferStrategy = (AbstractBufferStrategy) store *************** *** 167,171 **** public void test_writeWithExtend() { ! 
TemporaryStore store = (TemporaryStore) getStore(); AbstractBufferStrategy bufferStrategy = (AbstractBufferStrategy) store --- 167,171 ---- public void test_writeWithExtend() { ! TemporaryRawStore store = (TemporaryRawStore) getStore(); AbstractBufferStrategy bufferStrategy = (AbstractBufferStrategy) store *************** *** 206,210 **** assertEquals("userExtent",userExtent, bufferStrategy.getUserExtent()); ! assertEquals(b, bufferStrategy.read(addr, null)); /* --- 206,210 ---- assertEquals("userExtent",userExtent, bufferStrategy.getUserExtent()); ! assertEquals(b, bufferStrategy.read(addr)); /* *************** *** 231,238 **** // verify data written before we overflowed the buffer. ! assertEquals(b, bufferStrategy.read(addr, null)); // verify data written after we overflowed the buffer. ! assertEquals(b2, bufferStrategy.read(addr2, null)); store.close(); --- 231,238 ---- // verify data written before we overflowed the buffer. ! assertEquals(b, bufferStrategy.read(addr)); // verify data written after we overflowed the buffer. ! assertEquals(b2, bufferStrategy.read(addr2)); store.close(); *************** *** 257,261 **** * correctness. */ ! TemporaryStore store = new TemporaryStore(Bytes.kilobyte*10, Bytes.kilobyte * 100, false); --- 257,261 ---- * correctness. */ ! TemporaryRawStore store = new TemporaryRawStore(Bytes.kilobyte*10, Bytes.kilobyte * 100, false); *************** *** 347,351 **** // verify the data. ! assertEquals(b, store.read(addr, null)); } --- 347,351 ---- // verify the data. ! assertEquals(b, store.read(addr)); } *************** *** 383,390 **** // verify data written before we overflowed the buffer. ! assertEquals(b, store.read(addr, null)); // verify data written after we overflowed the buffer. ! assertEquals(b2, store.read(addr2, null)); // the name of the on-disk file. --- 383,390 ---- // verify data written before we overflowed the buffer. ! assertEquals(b, store.read(addr)); // verify data written after we overflowed the buffer. ! 
assertEquals(b2, store.read(addr2)); // the name of the on-disk file. Index: AbstractTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/AbstractTestCase.java,v retrieving revision 1.17 retrieving revision 1.18 diff -C2 -d -r1.17 -r1.18 *** AbstractTestCase.java 17 Feb 2007 21:34:12 -0000 1.17 --- AbstractTestCase.java 21 Feb 2007 20:17:20 -0000 1.18 *************** *** 316,320 **** * Buffer. */ ! public void assertEquals(byte[] expected, ByteBuffer actual ) { if( expected == null ) throw new IllegalArgumentException(); --- 316,320 ---- * Buffer. */ ! public static void assertEquals(byte[] expected, ByteBuffer actual ) { if( expected == null ) throw new IllegalArgumentException(); *************** *** 322,325 **** --- 322,333 ---- if( actual == null ) fail("actual is null"); + if( actual.hasArray() && actual.arrayOffset() == 0 ) { + + assertEquals(expected,actual.array()); + + return; + + } + /* Create a read-only view on the buffer so that we do not mess with * its position, mark, or limit. *************** *** 349,353 **** * random length and having random contents. */ ! public ByteBuffer getRandomData(IJournal journal) { final int nbytes = r.nextInt(1024) + 1; --- 357,361 ---- * random length and having random contents. */ ! public ByteBuffer getRandomData() { final int nbytes = r.nextInt(1024) + 1; Index: AbstractRestartSafeTestCase.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/journal/AbstractRestartSafeTestCase.java,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** AbstractRestartSafeTestCase.java 9 Feb 2007 16:13:17 -0000 1.3 --- AbstractRestartSafeTestCase.java 21 Feb 2007 20:17:20 -0000 1.4 *************** *** 119,123 **** // read the data back. ! ByteBuffer actual = store.read(addr1,null); assertEquals(expected,actual); --- 119,123 ---- // read the data back. ! 
ByteBuffer actual = store.read(addr1); assertEquals(expected,actual); *************** *** 149,153 **** */ try { ! actual = store.read(addr1,null); fail("Expecting: "+IllegalArgumentException.class); }catch(IllegalArgumentException ex) {... [truncated message content] |
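The `getCondition()` helper in ComparisonTestDriver above layers per-condition overrides on top of inherited defaults without modifying the shared `Properties` object. A minimal self-contained sketch of that pattern (class and method names here are hypothetical, mirroring the `NV` helper in the committed code):

```java
import java.util.Properties;

public class ConditionDemo {

    // Hypothetical name/value pair, modeled on the NV class in ComparisonTestDriver.
    static final class NV {
        final String name, value;
        NV(String name, Object value) { this.name = name; this.value = value.toString(); }
    }

    // Layer overrides on top of inherited defaults. new Properties(defaults)
    // installs the defaults without copying them, so the shared object is
    // never modified -- the same approach getCondition() takes.
    static Properties override(Properties defaults, NV[] entries) {
        Properties p = new Properties(defaults);
        for (NV e : entries) p.setProperty(e.name, e.value);
        return p;
    }

    public static void main(String[] args) {
        Properties base = new Properties();
        base.setProperty("bufferMode", "Disk");
        base.setProperty("timeout", "30");
        Properties cond = override(base, new NV[] { new NV("bufferMode", "Direct") });
        System.out.println(cond.getProperty("bufferMode")); // Direct (overridden)
        System.out.println(cond.getProperty("timeout"));    // 30 (inherited)
        System.out.println(base.getProperty("bufferMode")); // Disk (base unmodified)
    }
}
```

Because overrides live in a fresh `Properties` layer, many `Condition` objects can share one base configuration safely.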
From: Bryan T. <tho...@us...> - 2007-02-21 20:17:34
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/util In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv5461/src/test/com/bigdata/util Modified Files: TestChecksumUtility.java Log Message: Further work supporting transactional isolation. Index: TestChecksumUtility.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/util/TestChecksumUtility.java,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** TestChecksumUtility.java 5 Feb 2007 18:17:46 -0000 1.1 --- TestChecksumUtility.java 21 Feb 2007 20:17:22 -0000 1.2 *************** *** 166,170 **** data.length)); ! ByteBuffer direct = ByteBuffer.allocateDirect(data.length); direct.put(data); assertEquals(expectedChecksum, chk.checksum(direct, 0, --- 166,170 ---- data.length)); ! ByteBuffer direct = ByteBuffer.allocate(data.length); direct.put(data); assertEquals(expectedChecksum, chk.checksum(direct, 0, *************** *** 190,194 **** data.length-10)); ! ByteBuffer direct = ByteBuffer.allocateDirect(data.length); direct.put(data); assertEquals(expectedChecksum, chk.checksum(direct, 20, --- 190,194 ---- data.length-10)); ! ByteBuffer direct = ByteBuffer.allocate(data.length); direct.put(data); assertEquals(expectedChecksum, chk.checksum(direct, 20, |
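The TestChecksumUtility change above swaps `ByteBuffer.allocateDirect` for `ByteBuffer.allocate`. The practical difference is that a heap buffer is backed by an accessible `byte[]` while a direct buffer is off-heap with no backing array; a short sketch of the distinction:

```java
import java.nio.ByteBuffer;

public class BufferKindDemo {
    public static void main(String[] args) {
        // Heap buffer: backed by a byte[] reachable via array()/arrayOffset().
        ByteBuffer heap = ByteBuffer.allocate(16);
        // Direct buffer: allocated off-heap; hasArray() reports false.
        ByteBuffer direct = ByteBuffer.allocateDirect(16);
        System.out.println(heap.hasArray());   // true
        System.out.println(direct.hasArray()); // false
    }
}
```

This matters to the companion AbstractTestCase change in this commit, whose `assertEquals(byte[], ByteBuffer)` takes a fast path when `actual.hasArray()` is true.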
From: Bryan T. <tho...@us...> - 2007-02-21 20:17:34
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata/objndx In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv5461/src/test/com/bigdata/objndx Modified Files: TestAll.java Removed Files: TestBTreeWithJournal.java Log Message: Further work supporting transactional isolation. --- TestBTreeWithJournal.java DELETED --- Index: TestAll.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/objndx/TestAll.java,v retrieving revision 1.29 retrieving revision 1.30 diff -C2 -d -r1.29 -r1.30 *** TestAll.java 15 Feb 2007 01:34:23 -0000 1.29 --- TestAll.java 21 Feb 2007 20:17:22 -0000 1.30 *************** *** 104,109 **** // test copy-on-write scenarios. suite.addTestSuite( TestCopyOnWrite.class ); - // stress test using journal as the backing store. - suite.addTestSuite( TestBTreeWithJournal.class ); /* --- 104,107 ---- *************** *** 128,148 **** // test of the bloom filter integration. suite.addTestSuite( TestIndexSegmentWithBloomFilter.class ); ! // @todo test compacting merge of two index segments. suite.addTestSuite( TestIndexSegmentMerger.class ); /* - * use of btree to support transactional isolation. - * - * @todo verify that null is allowed to represent a delted value. - * - * @todo test of double-delete. - * - * @todo test as simple object store (persistent identifiers) by - * refactoring the journal test suites. - * - * @todo test on partitioned index. - */ - - /* * use of btree to support column store. * --- 126,133 ---- // test of the bloom filter integration. suite.addTestSuite( TestIndexSegmentWithBloomFilter.class ); ! // test compacting merge of two index segments. suite.addTestSuite( TestIndexSegmentMerger.class ); /* * use of btree to support column store. * |
From: Bryan T. <tho...@us...> - 2007-02-21 20:17:34
Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/util/concurrent In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv5461/src/java/com/bigdata/util/concurrent Added Files: DaemonThreadFactory.java Log Message: Further work supporting transactional isolation. --- NEW FILE: DaemonThreadFactory.java --- package com.bigdata.util.concurrent; import java.util.concurrent.Executors; import java.util.concurrent.ThreadFactory; /** * A thread factory that configures the thread as a daemon thread. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> * @version $Id$ */ public class DaemonThreadFactory implements ThreadFactory { final private ThreadFactory delegate; private static ThreadFactory _default = new DaemonThreadFactory(); /** * Returns an instance based on {@link Executors#defaultThreadFactory()} * that configures the thread for daemon mode. */ final public static ThreadFactory defaultThreadFactory() { return _default; } /** * Uses {@link Executors#defaultThreadFactory()} as the delegate. */ public DaemonThreadFactory() { this( Executors.defaultThreadFactory() ); } /** * Uses the specified delegate {@link ThreadFactory}. * * @param delegate * The delegate thread factory that is responsible for * creating the threads. */ public DaemonThreadFactory(ThreadFactory delegate) { assert delegate != null; this.delegate = delegate; } public Thread newThread(Runnable r) { Thread t = delegate.newThread( r ); t.setDaemon(true); // System.err.println("new thread: "+t.getName()); return t; } } |
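The DaemonThreadFactory added above delegates thread creation and then flags each thread as a daemon, so executor pools built with it cannot keep the JVM alive after `main` exits. A self-contained sketch of the same pattern applied to an `ExecutorService` (the inline lambda reproduces what the factory does; it does not depend on the committed class):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadFactory;

public class DaemonPoolDemo {
    public static void main(String[] args) throws Exception {
        // Same pattern as DaemonThreadFactory: delegate creation to the
        // default factory, then mark the thread as a daemon.
        ThreadFactory daemonFactory = r -> {
            Thread t = Executors.defaultThreadFactory().newThread(r);
            t.setDaemon(true);
            return t;
        };
        ExecutorService pool = Executors.newSingleThreadExecutor(daemonFactory);
        Future<Boolean> isDaemon = pool.submit(() -> Thread.currentThread().isDaemon());
        System.out.println(isDaemon.get()); // true
        pool.shutdown();
    }
}
```

Daemon pools suit background work such as the asynchronous overflow handling this refactoring introduces, since forgetting an explicit `shutdown()` no longer hangs process exit.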
From: Bryan T. <tho...@us...> - 2007-02-21 20:17:34
Update of /cvsroot/cweb/bigdata/src/test/com/bigdata In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv5461/src/test/com/bigdata Modified Files: TestAll.java Log Message: Further work supporting transactional isolation. Index: TestAll.java =================================================================== RCS file: /cvsroot/cweb/bigdata/src/test/com/bigdata/TestAll.java,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** TestAll.java 13 Feb 2007 23:01:12 -0000 1.2 --- TestAll.java 21 Feb 2007 20:17:22 -0000 1.3 *************** *** 81,90 **** TestSuite suite = new TestSuite("bigdata"); - /* - * @todo refactor the btree tests that depend on the journal for testing - * scale into a different test suite so that we can test the atomic store - * and restart-safe features of the journal before getting into testing of - * the btrees integrated with the journal. - */ suite.addTest( com.bigdata.io.TestAll.suite() ); suite.addTest( com.bigdata.util.TestAll.suite() ); --- 81,84 ---- |