From: <tho...@us...> - 2011-12-20 17:30:36
Revision: 5816
          http://bigdata.svn.sourceforge.net/bigdata/?rev=5816&view=rev
Author:   thompsonbry
Date:     2011-12-20 17:30:25 +0000 (Tue, 20 Dec 2011)

Log Message:
-----------
Updated release notes. The 1.0.x branch actually has two versions of RWStore.properties in the bigdata-war module. The one in bigdata-war/src/resources is the one that winds up being installed in the WAR. I have therefore made it a copy of the one in bigdata-war/RWStore.properties. This issue was already resolved in the 1.1.x branch (TERMS_REFACTOR_BRANCH).

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt
    branches/BIGDATA_RELEASE_1_0_0/bigdata-war/src/resources/RWStore.properties
    branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt

Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt	2011-12-20 13:49:43 UTC (rev 5815)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt	2011-12-20 17:30:25 UTC (rev 5816)
@@ -1,4 +1,4 @@
-This is a minor version release of bigdata(R). Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation.
+This is a 1.0.x maintenance release of bigdata(R). New users are encouraged to go directly to the 1.1.0 release. Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation.
 
 Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput.

Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-war/src/resources/RWStore.properties
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata-war/src/resources/RWStore.properties	2011-12-20 13:49:43 UTC (rev 5815)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata-war/src/resources/RWStore.properties	2011-12-20 17:30:25 UTC (rev 5816)
@@ -6,9 +6,7 @@
 ## Journal options.
 ##
 
-# The backing file. This contains all your data. You want to put this someplace
-# safe. The default locator will wind up in the directory from which you start
-# your servlet container.
+# The backing file.
 com.bigdata.journal.AbstractJournal.file=bigdata.jnl
 
 # The persistence engine. Use 'Disk' for the WORM or 'DiskRW' for the RWStore.
@@ -17,6 +15,9 @@
 com.bigdata.btree.writeRetentionQueue.capacity=4000
 com.bigdata.btree.BTree.branchingFactor=128
 
+# Bump up the branching factor for the statement indices on the default kb.
+com.bigdata.namespace.kb.spo.com.bigdata.btree.BTree.branchingFactor=512
+
 # 200M initial extent.
 com.bigdata.journal.AbstractJournal.initialExtent=209715200
 com.bigdata.journal.AbstractJournal.maximumExtent=209715200

Modified: branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt
===================================================================
--- branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt	2011-12-20 13:49:43 UTC (rev 5815)
+++ branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt	2011-12-20 17:30:25 UTC (rev 5816)
@@ -1,4 +1,4 @@
-This is a minor version release of bigdata(R). Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation.
+This is a 1.0.x maintenance release of bigdata(R). New users are encouraged to go directly to the 1.1.0 release. Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation.
 
 Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput.
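[Editor's note] The RWStore.properties file edited above is a plain java.util.Properties file, so the same settings can be applied when embedding bigdata directly rather than deploying the WAR. The sketch below is illustrative only and not part of the commit: it assumes the com.bigdata.journal.Journal(Properties) constructor and Journal.close() from the 1.0.x codebase, and uses only the property keys visible in the diff. Note that the per-namespace override is just the base com.bigdata.btree.BTree.branchingFactor key prefixed with the target namespace ("kb") and index ("spo").

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Properties;

import com.bigdata.journal.Journal;

public class OpenJournalFromRWStoreProperties {

    public static void main(final String[] args) throws Exception {

        final Properties p = new Properties();

        // Load the same file that the WAR deploys from bigdata-war/src/resources.
        final InputStream in = new FileInputStream("RWStore.properties");
        try {
            p.load(in);
        } finally {
            in.close();
        }

        // Programmatic equivalents of the lines shown in the diff above.
        p.setProperty("com.bigdata.journal.AbstractJournal.file", "bigdata.jnl");
        p.setProperty("com.bigdata.btree.BTree.branchingFactor", "128");
        // Per-namespace override: the base property key scoped to the "kb" namespace's spo index.
        p.setProperty(
                "com.bigdata.namespace.kb.spo.com.bigdata.btree.BTree.branchingFactor",
                "512");

        final Journal jnl = new Journal(p);
        try {
            // ... register indices or layer an RDF store on top, then commit ...
        } finally {
            jnl.close();
        }
    }
}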
From: <mar...@us...> - 2012-01-06 17:55:44
Revision: 5832
          http://bigdata.svn.sourceforge.net/bigdata/?rev=5832&view=rev
Author:   martyncutcher
Date:     2012-01-06 17:55:31 +0000 (Fri, 06 Jan 2012)

Log Message:
-----------
Update that fixes a number of issues.
1) Problem with RWStore session protection resulting in un-recycled storage
2) Problem with historical BTree index retention for recycled storage
3) Provide a new option to overwrite recycled storage as a debugging aid

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/IndexSegmentStore.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Journal.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/JournalDelegate.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/RWAddressManager.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rawstore/AbstractRawWormStore.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rawstore/IAddressManager.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rawstore/WormAddressManager.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/AllocBlock.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/FixedAllocator.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWWriteCacheService.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/sector/MemStore.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/btree/TestRawRecords.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/btree/TestRemoveAll.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/IndexSegmentStore.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/Journal.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/JournalDelegate.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/RWAddressManager.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rawstore/AbstractRawWormStore.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rawstore/IAddressManager.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rawstore/WormAddressManager.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/AllocBlock.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/FixedAllocator.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/RWWriteCacheService.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/sector/MemStore.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/test/com/bigdata/btree/TestRawRecords.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/test/com/bigdata/btree/TestRemoveAll.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java
    branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java

Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/IndexSegmentStore.java
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/IndexSegmentStore.java	2012-01-04 21:36:34 UTC (rev 5831)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/IndexSegmentStore.java	2012-01-06 17:55:31 UTC (rev 5832)
@@ -1552,6 +1552,10 @@
         return addressManager.getOffset(addr);
     }
 
+    final public long getPhysicalAddress(long addr) {
+        return addressManager.getPhysicalAddress(addr);
+    }
+
 //    final public void packAddr(DataOutput out, long addr) throws IOException {
 //        addressManager.packAddr(out, addr);
 //    }

Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java	2012-01-04 21:36:34 UTC (rev 5831)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java	2012-01-06 17:55:31 UTC (rev 5832)
@@ -1,2211 +1,2232 @@
-/**
-
-Copyright (C) SYSTAP, LLC 2006-2010. All rights reserved.
-
-Contact:
-     SYSTAP, LLC
-     4501 Tower Road
-     Greensboro, NC 27410
-     lic...@bi...
-
-This program is free software; you can redistribute it and/or modify
-it under the terms of the GNU General Public License as published by
-the Free Software Foundation; version 2 of the License.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-GNU General Public License for more details.
- -You should have received a copy of the GNU General Public License -along with this program; if not, write to the Free Software -Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - */ -/* - * Created on Feb 10, 2010 - */ - -package com.bigdata.io.writecache; - -import java.io.IOException; -import java.nio.ByteBuffer; -import java.nio.channels.FileChannel; -import java.util.Collection; -import java.util.Collections; -import java.util.Iterator; -import java.util.Map; -import java.util.Map.Entry; -import java.util.concurrent.ConcurrentHashMap; -import java.util.concurrent.ConcurrentMap; -import java.util.concurrent.ConcurrentSkipListMap; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.TimeoutException; -import java.util.concurrent.atomic.AtomicLong; -import java.util.concurrent.atomic.AtomicReference; -import java.util.concurrent.locks.Lock; -import java.util.concurrent.locks.ReentrantReadWriteLock; - -import org.apache.log4j.Logger; - -import com.bigdata.btree.IndexSegmentBuilder; -import com.bigdata.counters.CAT; -import com.bigdata.counters.CounterSet; -import com.bigdata.counters.Instrument; -import com.bigdata.io.DirectBufferPool; -import com.bigdata.io.FileChannelUtility; -import com.bigdata.io.IBufferAccess; -import com.bigdata.io.IReopenChannel; -import com.bigdata.journal.AbstractBufferStrategy; -import com.bigdata.journal.StoreTypeEnum; -import com.bigdata.journal.WORMStrategy; -import com.bigdata.journal.ha.HAWriteMessage; -import com.bigdata.rawstore.Bytes; -import com.bigdata.rawstore.IRawStore; -import com.bigdata.rwstore.RWStore; -import com.bigdata.util.ChecksumError; -import com.bigdata.util.ChecksumUtility; -import com.bigdata.util.concurrent.Memoizer; - -/** - * This class provides a write cache with read-through for NIO writes on a - * {@link FileChannel} (and potentially on a remote service). This class is - * designed to maximize the opportunity for efficient NIO by combining many - * writes onto a single direct {@link ByteBuffer} and then efficiently - * transferring those writes onto the backing channel in a channel dependent - * manner. In general, there are three use cases for a {@link WriteCache}: - * <ol> - * <li>Gathered writes. This case is used by the {@link RWStore}.</li> - * <li>Pure append of sequentially allocated records. This case is used by the - * {@link WORMStrategy} (WORM) and by the {@link IndexSegmentBuilder}.</li> - * <li>Write of a single large buffer owned by the caller. This case may be used - * when the caller wants to manage the buffers or when the caller's buffer is - * larger than the write cache.</li> - * </ol> - * The caller is responsible for managing which buffers are being written on and - * read on, when they are flushed, and when they are reset. It is perfectly - * reasonable to have more than one {@link WriteCache} and to read through on - * any {@link WriteCache} until it has been recycled. A {@link WriteCache} must - * be reset before it is put into play again for new writes. - * <p> - * Note: For an append-only model (WORM), the caller MUST serialize writes onto - * the {@link IRawStore} and the {@link WriteCache}. This is required in order - * to ensure that the records are laid out in a dense linear fashion on the - * {@link WriteCache} and permits the backing buffer to be transferred in a - * single IO to the backing file. 
- * <p> - * Note: For a {@link RWStore}, the caller must take more responsibility for - * managing the {@link WriteCache}(s) which are in play and scheduling their - * eviction onto the backing store. The caller can track the space remaining in - * each {@link WriteCache} and decide when to flush a {@link WriteCache} based - * on that information. - * - * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ - */ -abstract public class WriteCache implements IWriteCache { - - protected static final Logger log = Logger.getLogger(WriteCache.class); - - /** - * <code>true</code> iff per-record checksums are being maintained. - */ - private final boolean useChecksum; - - /** - * <code>true</code> iff per-record checksums are being maintained. - */ - private final boolean prefixWrites; - - /** - * The buffer used to absorb writes that are destined for some channel. - * <p> - * Note: This is an {@link AtomicReference} since we want to clear this - * field in {@link #close()}. - */ - final private AtomicReference<IBufferAccess> buf; - - /** - * The read lock allows concurrent {@link #acquire()}s while the write lock - * prevents {@link #acquire()} during critical sections such as - * {@link #flush(boolean, long, TimeUnit)}, {@link #reset()}, and - * {@link #close()}. - */ - final private ReentrantReadWriteLock lock = new ReentrantReadWriteLock(); - - /** - * Return the backing {@link ByteBuffer}. The caller may read or write on - * the buffer, but MUST NOT have a side effect on the - * {@link ByteBuffer#position()} without first synchronizing on the - * {@link ByteBuffer}. Once they are done, the caller MUST call - * {@link #release()}. - * <p> - * Note: This uses the read lock to allow concurrent read/write operations - * on the backing buffer. - * <p> - * Note: <strong>At most one write operation may execute concurrently in - * order to avoid side effects on the buffers position when copying data - * onto the buffer. This constraint must be imposed by the caller using a - * <code>synchronized(buf){}</code> block during the critical sections where - * the buffer position will be updated by a write. </strong> - * - * @return The {@link ByteBuffer}. - * - * @throws InterruptedException - * @throws IllegalStateException - * if the {@link WriteCache} is closed. - */ - private ByteBuffer acquire() throws InterruptedException, IllegalStateException { - - final Lock readLock = lock.readLock(); - - readLock.lockInterruptibly(); - - try { - - // latch.inc(); - - final IBufferAccess tmp = buf.get(); - - if (tmp == null) { - - // latch.dec(); - - throw new IllegalStateException(); - - } - - // Note: The ReadLock is still held! - return tmp.buffer(); - - } catch (Throwable t) { - - // Release the lock only on the error path. - readLock.unlock(); - - if (t instanceof InterruptedException) - throw (InterruptedException) t; - - if (t instanceof IllegalStateException) - throw (IllegalStateException) t; - - throw new RuntimeException(t); - - } - - } - - /** - * Release the read lock on an acquired {@link ByteBuffer}. - */ - private void release() { - - lock.readLock().unlock(); - - // latch.dec(); - - } - - /** - * Return a read-only view of the backing {@link ByteBuffer}. - * - * @return The read-only view -or- <code>null</code> if the - * {@link WriteCache} has been closed. - */ - ByteBuffer peek() { - - final ByteBuffer b = buf.get().buffer(); - - return b == null ? null : b.asReadOnlyBuffer(); - - } - - // /** - // * Return the buffer. No other thread will have access to the buffer. 
No - // * latch is established and there is no protocol for releasing the buffer - // * back. Instead, the buffer will become available again if the caller - // * releases the write lock. - // * - // * @throws IllegalMonitorStateException - // * unless the caller is holding the write lock. - // * @throws IllegalStateException - // * if the buffer reference has been cleared. - // */ - // protected ByteBuffer getExclusiveBuffer() { - // - // if (!lock.writeLock().isHeldByCurrentThread()) - // throw new IllegalMonitorStateException(); - // - // final ByteBuffer tmp = buf.get(); - // - // if (tmp == null) - // throw new IllegalStateException(); - // - // return tmp; - // - // } - - /** - * The metadata associated with a record in the {@link WriteCache}. - */ - public static class RecordMetadata { - - /** - * The offset of the record in the file. The offset may be relative to a - * base offset known to the writeOnChannel() implementation. - */ - public final long fileOffset; - - /** - * The offset within the {@link WriteCache}'s backing {@link ByteBuffer} - * of the start of the record. - */ - public final int bufferOffset; - - /** - * The length of the record in bytes as it will be written on the - * channel. If checksums are being written, then the length of the - * record has already been incorporated into this value. - */ - public final int recordLength; - - public RecordMetadata(final long fileOffset, final int bufferOffset, final int recordLength) { - - this.fileOffset = fileOffset; - - this.bufferOffset = bufferOffset; - - this.recordLength = recordLength; - - } - - public String toString() { - - return getClass().getSimpleName() + "{fileOffset=" + fileOffset + ",off=" + bufferOffset + ",len=" - + recordLength + "}"; - - } - - } - - /** - * An index into the write cache used for read through on the cache. The - * keys are the file offsets that would be used to read the corresponding - * record. The values describe the position in buffer where that record is - * found and the length of the record. - */ - final private ConcurrentMap<Long, RecordMetadata> recordMap; - - /** - * The offset of the first record written onto the {@link WriteCache}. This - * information is used when {@link #appendOnly} is <code>true</code> as it - * gives the starting offset at which the entire {@link ByteBuffer} may be - * written in a single IO. When {@link #appendOnly} is <code>false</code> - * this is basically meaningless. This is initialized to <code>-1L</code> as - * a clear indicator that there is no valid record written yet onto the - * cache. - */ - final private AtomicLong firstOffset = new AtomicLong(-1L); - - /** - * The capacity of the backing buffer. - */ - final private int capacity; - - /** - * When <code>true</code> {@link #close()} will release the - * {@link ByteBuffer} back to the {@link DirectBufferPool}. - */ - final private boolean releaseBuffer; - - /** - * A private instance used to compute the checksum of all data in the - * current {@link #buf}. This is enabled for the high availability write - * replication pipeline. The checksum over the entire {@link #buf} is - * necessary in this context to ensure that the receiver can verify the - * contents of the {@link #buf}. The per-record checksums CAN NOT be used - * for this purpose since large records may be broken across - */ - final private ChecksumHelper checker; - - /** - * The then current extent of the backing file as of the last record written - * onto the cache before it was written onto the write replication pipeline. 
- * The receiver is responsible for adjusting its local file size to match. - * - * @see WriteCacheService#setExtent(long) - */ - private final AtomicLong fileExtent = new AtomicLong(); - - /** - * Create a {@link WriteCache} from either a caller supplied buffer or a - * direct {@link ByteBuffer} allocated from the {@link DirectBufferPool}. - * <p> - * Note: The application MUST ensure that it {@link #close()}s the - * {@link WriteCache} or it can leak direct {@link ByteBuffer}s! - * <p> - * Note: NIO operations are performed using a direct {@link ByteBuffer} - * (that is, one use backing bytes are allocated on the C heap). When the - * caller supplies a {@link ByteBuffer} that is allocated on the Java heap - * as opposed to in native memory, a temporary direct {@link ByteBuffer} - * will be allocated for the IO operation by Java. The JVM can fail to - * release this temporary direct {@link ByteBuffer}, resulting in a memory - * leak. For this reason, the {@link WriteCache} SHOULD use a direct - * {@link ByteBuffer}. - * - * @see http://bugs.sun.com/bugdatabase/view_bug.do;jsessionid=8f - * ab76d1d4479fffffffffa5abfb09c719a30?bug_id=6210541 - * - * @param buf - * A {@link ByteBuffer} to be used as the write cache (optional). - * When <code>null</code> a buffer will be allocated for you from - * the {@link DirectBufferPool}. Buffers allocated on your behalf - * will be automatically released by {@link #close()}. - * @param scatteredWrites - * <code>true</code> iff the implementation uses scattered - * writes. The RW store uses scattered writes since its updates - * are written to different parts of the backing file. The WORM - * store does not since all updates are written to the end of the - * user extent in the backing file. - * @param useChecksum - * <code>true</code> iff the write cache will store the caller's - * checksum for a record and validate it on read. - * @param isHighlyAvailable - * when <code>true</code> the whole record checksum is maintained - * for use when replicating the write cache to along the write - * pipeline. - * @param bufferHasData - * when <code>true</code> the caller asserts that the buffer has - * data (from a replicated write), in which case the position - * should be the start of the data in the buffer and the limit - * the #of bytes with valid data. when <code>false</code>, the - * caller's buffer will be cleared. The code presumes that the - * {@link WriteCache} instance will be used to lay down a single - * buffer worth of data onto the backing file. - * - * @throws InterruptedException - */ - public WriteCache(IBufferAccess buf, final boolean scatteredWrites, final boolean useChecksum, - final boolean isHighlyAvailable, final boolean bufferHasData) throws InterruptedException { - - if (bufferHasData && buf == null) - throw new IllegalArgumentException(); - - if (buf == null) { - - buf = DirectBufferPool.INSTANCE.acquire(); - - this.releaseBuffer = true; - - } else { - - this.releaseBuffer = false; - - } - - // if (quorumManager == null) - // throw new IllegalArgumentException(); - - // this.quorumManager = quorumManager; - - this.useChecksum = useChecksum; - this.prefixWrites = scatteredWrites; - - if (isHighlyAvailable && !bufferHasData) { - // Note: No checker if buffer has data. - checker = new ChecksumHelper(); - } else { - checker = null; - } - - // save reference to the write cache. - this.buf = new AtomicReference<IBufferAccess>(buf); - - // the capacity of the buffer in bytes. 
- this.capacity = buf.buffer().capacity(); - - /* - * Discard anything in the buffer, resetting the position to zero, the - * mark to zero, and the limit to the capacity. - */ - if (!bufferHasData) { - buf.buffer().clear(); - } - - /* - * An estimate of the #of records that might fit within the write cache. - * This is based on an assumption that the "average" record is 1k. This - * is used solely to assign the initial capacity of this map. - */ - final int indexDefaultCapacity = capacity / (1 * Bytes.kilobyte32); - - /* - * allocate and initialize the write cache index. - * - * For scattered writes we choose to use a sorted map so that we can - * easily flush writes to the file channel in order. This may not be - * important depending on the caching strategy of the underlying system - * but it cannot be a bad thing. - * - * If we do not need to support scattered writes then we have the option - * to use the ConcurrentHashMap which has the advantage of constant - * access time for read through support. - * - * TODO: some literature indicates the ConcurrentSkipListMap scales - * better with concurrency, so we should benchmark this option for - * non-scattered writes as well. - */ - if (scatteredWrites) { - recordMap = new ConcurrentSkipListMap<Long, RecordMetadata>(); - } else { - recordMap = new ConcurrentHashMap<Long, RecordMetadata>(indexDefaultCapacity); - } - - if (bufferHasData) { - /* - * Populate the record map from the record. - */ - resetRecordMapFromBuffer(); - } - - } - - /** - * Adds some debugging information. - */ - public String toString() { - - return super.toString()// - + "{recordCount=" + recordMap.size()// - + ",firstOffset=" + firstOffset// - + ",releaseBuffer=" + releaseBuffer// - + ",bytesWritten=" + bytesWritten()// - + ",bytesRemaining=" + remaining()// - + "}"; - - } - - /** - * The offset of the first record written onto the {@link WriteCache}. This - * information is used when {@link #appendOnly} is <code>true</code> as it - * gives the starting offset at which the entire {@link ByteBuffer} may be - * written in a single IO. When {@link #appendOnly} is <code>false</code> - * this is basically meaningless. - * <p> - * Note: This has been raised into the - * {@link #writeOnChannel(ByteBuffer, long, Map, long)} method signature. It - * has been reduced to a package private method so it will remain visible to - * the unit tests, otherwise it could become private. - * - * @return The first offset written into the {@link WriteCache} since it was - * last {@link #reset()} and <code>-1L</code> if nothing has been - * written since the {@link WriteCache} was created or was last - * {@link #reset()}. - */ - final long getFirstOffset() { - - return firstOffset.get(); - - } - - /** - * The maximum length of a record which could be inserted into the buffer. - * <p> - * Note: When checksums are enabled, this is 4 bytes less than the actual - * capacity of the underlying buffer since each record requires an - * additional four bytes for the checksum field. - */ - final public int capacity() { - - return capacity - (useChecksum ? 4 : 0) - (prefixWrites ? 12 : 0); - - } - - /** - * Return the #of bytes remaining in the buffer. - * <p> - * Note: in order to rely on this value the caller MUST have exclusive - * access to the buffer. This API does not provide the means for acquiring - * that exclusive access. This is something that the caller has to arrange - * for themselves, which is why this is a package private method. 
- */ - final int remaining() { - - final int remaining = capacity - buf.get().buffer().position(); - - return remaining; - - } - - /** - * The #of bytes written on the backing buffer. - */ - public final int bytesWritten() { - - return buf.get().buffer().position(); - - } - - /** - * Return <code>true</code> if there are no records buffered on the cache. - * - * @todo This currently tests the {@link #recordMap}. In fact, for at least - * the {@link RWStore} the record map COULD be empty with cleared - * writes on the backing {@link ByteBuffer}. Therefore this tests - * whether the {@link WriteCache} has data to be written but does not - * clearly report whether or not some data has been written onto the - * buffer (and hence it has fewer bytes remaining than might otherwise - * be expected). - */ - final public boolean isEmpty() { - - return recordMap.isEmpty(); - - } - - /** - * Set the current extent of the backing file on the {@link WriteCache} - * object. When used as part of an HA write pipeline, the receiver is - * responsible for adjusting its local file size to match the file extent in - * each {@link WriteCache} message. - * - * @param fileExtent - * The current extent of the file. - * - * @throws IllegalArgumentException - * if the file extent is negative. - * - * @see WriteCacheService#setExtent(long) - */ - public void setFileExtent(final long fileExtent) { - - if (fileExtent < 0L) - throw new IllegalArgumentException(); - - this.fileExtent.set(fileExtent); - - } - - public long getFileExtent() { - - return fileExtent.get(); - - } - - /** - * Return the checksum of all data written into the backing buffer for this - * {@link WriteCache} instance since it was last {@link #reset()}. - * - * @return The running checksum of the data written into the backing buffer. - * - * @throws UnsupportedOperationException - * if the {@link WriteCache} is not maintaining this checksum - * (i.e., if <code>isHighlyAvailable := false</code> was - * specified to the constructor). - */ - public int getWholeBufferChecksum() { - - if (checker == null) - throw new UnsupportedOperationException(); - - return checker.getChecksum(); - - } - - /** - * {@inheritDoc} - * - * @throws IllegalStateException - * If the buffer is closed. - * @throws IllegalArgumentException - * If the caller's record is larger than the maximum capacity of - * cache (the record could not fit within the cache). The caller - * should check for this and provide special handling for such - * large records. For example, they can be written directly onto - * the backing channel. - */ - public boolean write(final long offset, final ByteBuffer data, final int chk) throws InterruptedException { - - return write(offset, data, chk, true/* writeChecksum */); - - } - - /** - * - * @param offset - * @param data - * @param chk - * @param writeChecksum - * The checksum is appended to the record IFF this argument is - * <code>true</code> and checksums are in use. - * @return - * @throws InterruptedException - */ - boolean write(final long offset, final ByteBuffer data, final int chk, boolean writeChecksum) - throws InterruptedException { - - // Note: The offset MAY be zero. This allows for stores without any - // header block. - - if (m_written) { // should be clean, NO WAY should this be written to! 
- log.error("Writing to CLEAN cache: " + hashCode()); - throw new IllegalStateException("Writing to CLEAN cache: " + hashCode()); - } - - if (data == null) - throw new IllegalArgumentException(AbstractBufferStrategy.ERR_BUFFER_NULL); - - final WriteCacheCounters counters = this.counters.get(); - - final ByteBuffer tmp = acquire(); - - try { - - final int remaining = data.remaining(); - - // The #of bytes to transfer into the write cache. - final int datalen = remaining + (writeChecksum && useChecksum ? 4 : 0); - final int nwrite = datalen + (prefixWrites ? 12 : 0); - - if (nwrite > capacity) { - // This is more bytes than the total capacity of the buffer. - throw new IllegalArgumentException(AbstractBufferStrategy.ERR_BUFFER_OVERRUN); - - } - - if (remaining == 0) - throw new IllegalArgumentException(AbstractBufferStrategy.ERR_BUFFER_EMPTY); - - /* - * Note: We need to be synchronized on the ByteBuffer here since - * this operation relies on the position() being stable. - * - * Note: Also see clearAddrMap(long) which is synchronized on the - * acquired ByteBuffer in the same manner to protect it during - * critical sections which have a side effect on the buffer - * position. - */ - final int pos; - synchronized (tmp) { - - // the position() at which the record is cached in the buffer. - final int spos = tmp.position(); - - if (spos + nwrite > capacity) { - - /* - * There is not enough room left in the write cache for this - * record. - */ - - return false; - - } - - // add prefix data if required and set data position in buffer - if (prefixWrites) { - tmp.putLong(offset); - tmp.putInt(datalen); - pos = spos + 12; - } else { - pos = spos; - } - - tmp.put(data); - - // copy the record into the cache, updating position() as we go. - // TODO: Note that the checker must be invalidated if a RWCache - // "deletes" an entry - // by zeroing an address. - if (checker != null) { - // update the checksum (no side-effects on [data]) - ByteBuffer chkBuf = tmp.asReadOnlyBuffer(); - chkBuf.position(spos); - chkBuf.limit(tmp.position()); - checker.update(chkBuf); - } - - // write checksum - if any - if (writeChecksum && useChecksum) { - tmp.putInt(chk); - if (checker != null) { - // update the running checksum to include this too. - checker.update(chk); - } - } - - // set while synchronized since no contention. - firstOffset.compareAndSet(-1L/* expect */, offset/* update */); - - // update counters while holding the lock. - counters.naccept++; - counters.bytesAccepted += nwrite; - - } // synchronized(tmp) - - /* - * Add metadata for the record so it can be read back from the - * cache. - */ - if (recordMap.put(Long.valueOf(offset), new RecordMetadata(offset, pos, datalen)) != null) { - /* - * Note: This exception indicates that the abort protocol did - * not reset() the current write cache before new writes were - * laid down onto the buffer. - */ - throw new AssertionError("Record exists for offset in cache: offset=" + offset); - } - - if (log.isTraceEnabled()) { // @todo rather than hashCode() set a - // buffer# on each WriteCache instance. - log.trace("offset=" + offset + ", pos=" + pos + ", nwrite=" + nwrite + ", writeChecksum=" - + writeChecksum + ", useChecksum=" + useChecksum + ", nrecords=" + recordMap.size() - + ", hashCode=" + hashCode()); - } - - return true; - - } finally { - - release(); - - } - - } - - /** - * {@inheritDoc} - * - * @throws IllegalStateException - * If the buffer is closed. 
- */ - public ByteBuffer read(final long offset) throws InterruptedException, ChecksumError { - - final WriteCacheCounters counters = this.counters.get(); - - final ByteBuffer tmp = acquire(); - - try { - - // Look up the metadata for that record in the cache. - final RecordMetadata md; - if ((md = recordMap.get(offset)) == null) { - - // The record is not in this write cache. - counters.nmiss.increment(); - - return null; - } - - // length of the record w/o checksum field. - final int reclen = md.recordLength - (useChecksum ? 4 : 0); - - // the start of the record in writeCache. - final int pos = md.bufferOffset; - - // create a view with same offset, limit and position. - final ByteBuffer view = tmp.duplicate(); - - // adjust the view to just the record of interest. - view.limit(pos + reclen); - view.position(pos); - - // System.out.println("WriteCache, addr: " + offset + ", from: " + - // pos + ", " + md.recordLength + ", thread: " + - // Thread.currentThread().getId()); - /* - * Copy the data into a newly allocated buffer. This is necessary - * because our hold on the backing ByteBuffer for the WriteCache is - * only momentary. As soon as we release() the buffer the data in - * the buffer could be changed. - */ - - final byte[] b = new byte[reclen]; - - final ByteBuffer dst = ByteBuffer.wrap(b); - - // copy the data into [dst] (and the backing byte[]). - dst.put(view); - - // flip buffer for reading. - dst.flip(); - - if (useChecksum) { - - final int chk = tmp.getInt(pos + reclen); - - if (chk != ChecksumUtility.threadChk.get().checksum(b, 0/* offset */, reclen)) { - - // Note: [offset] is a (possibly relative) file offset. - throw new ChecksumError(checkdata()); - - } - - } - - counters.nhit.increment(); - - if (log.isTraceEnabled()) { - log.trace(show(dst, "read bytes")); - } - - return dst; - - } finally { - - release(); - - } - - } - - /** - * Dump some metadata and leading bytes from the buffer onto a - * {@link String}. - * - * @param buf - * The buffer. - * @param prefix - * A prefix for the dump. - * - * @return The {@link String}. - */ - private String show(final ByteBuffer buf, final String prefix) { - final StringBuffer str = new StringBuffer(); - int tpos = buf.position(); - if (tpos == 0) { - tpos = buf.limit(); - } - str.append(prefix + ", length: " + tpos + " : "); - for (int tb = 0; tb < tpos && tb < 20; tb++) { - str.append(Integer.toString(buf.get(tb)) + ","); - } - // log.trace(str.toString()); - return str.toString(); - } - - // private String show(final byte[] buf, int len, final String prefix) { - // final StringBuffer str = new StringBuffer(); - // str.append(prefix + ": "); - // int tpos = len; - // str.append(prefix + ", length: " + tpos + " : "); - // for (int tb = 0; tb < tpos && tb < 20; tb++) { - // str.append(Integer.toString(buf[tb]) + ","); - // } - // // log.trace(str.toString()); - // return str.toString(); - // } - - /** - * Flush the writes to the backing channel but DOES NOT sync the channel and - * DOES NOT {@link #reset()} the {@link WriteCache}. {@link #reset()} is a - * separate operation because a common use is to retain recently flushed - * instances for read-back. - * - * @param force - * When <code>true</code>, the data will be forced to stable - * media. 
- * - * @throws IOException - * @throws InterruptedException - */ - public void flush(final boolean force) throws IOException, InterruptedException { - - try { - - if (!flush(force, Long.MAX_VALUE, TimeUnit.NANOSECONDS)) { - - throw new RuntimeException(); - - } - - } catch (TimeoutException e) { - - throw new RuntimeException(e); - - } - - } - - /** - * Flush the writes to the backing channel but DOES NOT sync the channel and - * DOES NOT {@link #reset()} the {@link WriteCache}. {@link #reset()} is a - * separate operation because a common use is to retain recently flushed - * instances for read-back. - * - * @param force - * When <code>true</code>, the data will be forced to stable - * media. - * - * @throws IOException - * @throws TimeoutException - * @throws InterruptedException - */ - public boolean flush(final boolean force, final long timeout, final TimeUnit unit) throws IOException, - TimeoutException, InterruptedException { - - // start time - final long begin = System.nanoTime(); - - // total nanoseconds to wait. - final long nanos = unit.toNanos(timeout); - - // remaining nanoseconds to wait. - long remaining = nanos; - - final WriteCacheCounters counters = this.counters.get(); - - final Lock writeLock = lock.writeLock(); - - if (!writeLock.tryLock(remaining, TimeUnit.NANOSECONDS)) { - - return false; - - } - - try { - - final ByteBuffer tmp = this.buf.get().buffer(); - - if (tmp == null) - throw new IllegalStateException(); - - // #of bytes to write on the disk. - final int nbytes = tmp.position(); - - if (log.isTraceEnabled()) { - log.trace("nbytes=" + nbytes + ", firstOffset=" + getFirstOffset() + ", nflush=" + counters.nflush); - } - - if (nbytes == 0) { - - // NOP. - return true; - - } - - /* - * Create a view with same offset, limit and position. - * - * Note: The writeOnChannel method is given the view. This prevents - * it from adjusting the position() on the backing buffer. - */ - { - - final ByteBuffer view = tmp.duplicate(); - - // adjust the view to just the dirty record. - view.limit(nbytes); - view.position(0); - - // remaining := (total - elapsed). - remaining = nanos - (System.nanoTime() - begin); - - // write the data on the disk file. - final boolean ret = writeOnChannel(view, getFirstOffset(), Collections.unmodifiableMap(recordMap), - remaining); - - if (!ret) { - throw new TimeoutException("Unable to flush WriteCache"); - } - - counters.nflush++; - - return ret; - - } - - } finally { - - writeLock.unlock(); - - } - - } - - /** - * Debug routine logs @ ERROR additional information when a checksum error - * has been encountered. - * - * @return An informative error message. - * - * @throws InterruptedException - * @throws IllegalStateException - */ - private String checkdata() throws IllegalStateException, InterruptedException { - - if (!useChecksum) { - return "Unable to check since checksums are not enabled"; - } - - ByteBuffer tmp = acquire(); - try { - int nerrors = 0; - int nrecords = recordMap.size(); - - for (Entry<Long, RecordMetadata> ent : recordMap.entrySet()) { - - final RecordMetadata md = ent.getValue(); - - // length of the record w/o checksum field. - final int reclen = md.recordLength - 4; - - // the start of the record in writeCache. - final int pos = md.bufferOffset; - - final int chk = tmp.getInt(pos + reclen); - - // create a view with same offset, limit and position. - final ByteBuffer view = tmp.duplicate(); - - // adjust the view to just the record of interest. 
- view.limit(pos + reclen); - view.position(pos); - - final byte[] b = new byte[reclen]; - - final ByteBuffer dst = ByteBuffer.wrap(b); - - // copy the data into [dst] (and the backing byte[]). - dst.put(view); - if (chk != ChecksumUtility.threadChk.get().checksum(b, 0/* offset */, reclen)) { - log.error("Bad data for address: " + ent.getKey()); - nerrors++; - } - - } - return "WriteCache checkdata - records: " + nrecords + ", errors: " + nerrors; - } finally { - release(); - } - } - - /** - * Write the data from the buffer onto the channel. This method provides a - * uniform means to request that the buffer write itself onto the backing - * channel, regardless of whether the channel is backed by a file, a socket, - * etc. - * <p> - * Implementations of this method MAY support gathered writes, depending on - * the channel. The necessary information to perform a gathered write is - * present in the <i>recordMap</i>. On the other hand, the implementation - * MAY require that the records in the cache are laid out for a WORM, in - * which case {@link #getFirstOffset()} provides the starting offset for the - * data to be written. The application MUST coordinate the requirements for - * a R/W or WORM store with the use of the {@link WriteCache} and the means - * to write on the backing channel. - * - * @param buf - * The data to be written. Only the dirty bytes are visible in - * this view. The implementation should write all bytes from the - * current position to the limit. - * @param firstOffset - * The offset of the first record in the recordMap into the file - * (may be relative to a base offset within the file). This is - * provided as an optimization for the WORM which writes its - * records contiguously on the backing store. - * @param recordMap - * The mapping of record offsets onto metadata about those - * records. - * @param nanos - * The timeout for the operation in nanoseconds. - * - * @return <code>true</code> if the operation was completed successfully - * within the time alloted. - * - * @throws InterruptedException - * if the thread was interrupted. - * @throws IOException - * if there was an IO problem. - */ - abstract protected boolean writeOnChannel(final ByteBuffer buf, final long firstOffset, - final Map<Long, RecordMetadata> recordMap, final long nanos) throws InterruptedException, TimeoutException, - IOException; - - /** - * {@inheritDoc}. - * <p> - * This implementation clears the buffer, the record map, and other internal - * metadata such that the {@link WriteCache} is prepared to receive new - * writes. - * - * @throws IllegalStateException - * if the write cache is closed. - */ - public void reset() throws InterruptedException { - - final Lock writeLock = lock.writeLock(); - - writeLock.lockInterruptibly(); - - try { - - // // wait until there are no readers using the buffer. - // latch.await(); - - final ByteBuffer tmp = buf.get().buffer(); - - if (tmp == null) { - - // Already closed. - throw new IllegalStateException(); - - } - - // reset all state. - _resetState(tmp); - - } finally { - - writeLock.unlock(); - - } - - } - - /** - * Permanently take the {@link WriteCache} instance out of service. If the - * buffer was allocated by the {@link WriteCache} then it is released back - * to the {@link DirectBufferPool}. After this method is called, records can - * no longer be read from nor written onto the {@link WriteCache}. It is - * safe to invoke this method more than once. 
- * <p> - * Concurrent {@link #read(long, int)} requests will be serviced if the - * already hold the the read lock but requests will fail once the - * - * @throws InterruptedException - */ - public void close() throws InterruptedException { - - final Lock writeLock = lock.writeLock(); - - writeLock.lockInterruptibly(); - - try { - - // // wait until there are no readers using the buffer. - // latch.await(); - - /* - * Note: This method is thread safe. Only one thread will manage to - * clear the AtomicReference and it will do the rest of the work as - * well. - */ - - // position := 0; limit := capacity. - final IBufferAccess tmp = buf.get(); - - if (tmp == null) { - - // Already closed. - return; - - } - - if (buf.compareAndSet(tmp/* expected */, null/* update */)) { - - try { - - _resetState(tmp.buffer()); - - } finally { - - if (releaseBuffer) { - - tmp.release(); - - } - - } - - } - - } finally { - - writeLock.unlock(); - - } - - } - - /** - * Reset the internal state of the {@link WriteCache} in preparation to - * reuse it to receive more writes. - * <p> - * Note: Keep private unless strong need for override since you can not call - * this method without holding the write lock - * - * @param tmp - */ - private void _resetState(final ByteBuffer tmp) { - - if (tmp == null) - throw new IllegalArgumentException(); - - // clear the index since all records were flushed to disk. - recordMap.clear(); - - // clear to well known invalid offset. - firstOffset.set(-1L); - - // position := 0; limit := capacity. - tmp.clear(); - - if (checker != null) { - - // reset the running checksum of the data written onto the backing - // buffer. - checker.reset(); - - } - - // Martyn: I moved your debug flag here so it is always cleared by - // reset(). - m_written = false; - - } - - /** - * Return the RMI message object that will accompany the payload from the - * {@link WriteCache} when it is replicated along the write pipeline. - * - * @return cache A {@link WriteCache} to be replicated. - */ - public HAWriteMessage newHAWriteMessage(final long quorumToken) { - - return new HAWriteMessage(bytesWritten(), getWholeBufferChecksum(), - prefixWrites ? StoreTypeEnum.RW : StoreTypeEnum.WORM, - quorumToken, fileExtent.get(), firstOffset.get()); - - } - - /** - * The current performance counters. - */ - protected final AtomicReference<WriteCacheCounters> counters = new AtomicReference<WriteCacheCounters>( - new WriteCacheCounters()); - - /** - * Sets the performance counters to be used by the write cache. A service - * should do this if you want to aggregate the performance counters across - * multiple {@link WriteCache} instances. - * - * @param newVal - * The shared performance counters. - * - * @throws IllegalArgumentException - * if the argument is <code>null</code>. - */ - void setCounters(final WriteCacheCounters newVal) { - - if (newVal == null) - return; - - this.counters.set(newVal); - - } - - /** - * Return the performance counters for the write cacher. - */ - public CounterSet getCounters() { - - return counters.get().getCounters(); - - } - - /** - * Performance counters for the {@link WriteCache}. - * <p> - * Note: thread-safety is required for: {@link #nhit} and {@link #nmiss}. - * The rest should be Ok without additional synchronization, CAS operators, - * etc (mainly because they are updated while holding a lock). - * - * @author <a href="mailto:tho...@us...">Bryan - * Thompson</a> - */ - public static class WriteCacheCounters { - - /* - * read on the cache. 
- */ - - /** - * #of read requests that are satisfied by the write cache. - */ - public final CAT nhit = new CAT(); - - /** - * The #of read requests that are not satisfied by the write cache. - */ - public final CAT nmiss = new CAT(); - - /* - * write on the cache. - */ - - /** - * #of records accepted for eventual write onto the backing channel. - */ - public long naccept; - - /** - * #of bytes accepted for eventual write onto the backing channel. - */ - public long bytesAccepted; - - /* - * write on the channel. - */ - - /** - * #of times {@link IWriteCache#flush(boolean)} was called. - */ - public long nflush; - - /** - * #of writes on the backing channel. Note that some write cache - * implementations do ordered writes and will therefore do one write per - * record while others do append only and therefore do one write per - * write cache flush. Note that in both cases we may have to redo a - * write if the backing channel was concurrently closed, so the value - * here can diverge from the #of accepted records and the #of requested - * flushes. - */ - public long nwrite; - - /** - * #of bytes written onto the backing channel. - */ - public long bytesWritten; - - /** - * Total elapsed time writing onto the backing channel. - */ - public long elapsedWriteNanos; - - public CounterSet getCounters() { - - final CounterSet root = new CounterSet(); - - /* - * read on the cache. - */ - - root.addCounter("nhit", new Instrument<Long>() { - public void sample() { - setValue(nhit.get()); - } - }); - - root.addCounter("nmiss", new Instrument<Long>() { - public void sample() { - setValue(nmiss.get()); - } - }); - - root.addCounter("hitRate", new Instrument<Double>() { - public void sample() { - final long nhit = WriteCacheCounters.this.nhit.get(); - final long ntests = nhit + WriteCacheCounters.this.nmiss.get(); - setValue(ntests == 0L ? 0d : (double) nhit / ntests); - } - }); - - /* - * write on the cache. - */ - - // #of records accepted by the write cache. - root.addCounter("naccept", new Instrument<Long>() { - public void sample() { - setValue(naccept); - } - }); - - // #of bytes in records accepted by the write cache. - root.addCounter("bytesAccepted", new Instrument<Long>() { - public void sample() { - setValue(bytesAccepted); - } - }); - - /* - * write on the channel. - */ - - // #of times the write cache was flushed to the backing channel. - root.addCounter("nflush", new Instrument<Long>() { - public void sample() { - setValue(nflush); - } - }); - - // #of writes onto the backing channel. - root.addCounter("nwrite", new Instrument<Long>() { - public void sample() { - setValue(nwrite); - } - }); - - // #of bytes written onto the backing channel. - root.addCounter("bytesWritten", new Instrument<Long>() { - public void sample() { - setValue(bytesWritten); - } - }); - - // average bytes per write (will under report if we must retry - // writes). - root.addCounter("bytesPerWrite", new Instrument<Double>() { - public void sample() { - final double bytesPerWrite = (nwrite == 0 ? 0d : (bytesWritten / (double) nwrite)); - setValue(bytesPerWrite); - } - }); - - // elapsed time writing on the backing channel. 
- root.addCounter("writeSecs", new Instrument<Double>() { - public void sample() { - setValue(elapsedWriteNanos / 1000000000.); - } - }); - - return root; - - } // getCounters() - - public String toString() { - - return getCounters().toString(); - - } - - } // class WriteCacheCounters - - /** - * A {@link WriteCache} implementation suitable for an append-only file such - * as the {@link WORMStrategy} or the output file of the - * {@link IndexSegmentBuilder}. - * - * @author <a href="mailto:tho...@us...">Bryan - * Thompson</a> - */ - public static class FileChannelWriteCache extends WriteCache { - - /** - * An offset which will be applied to each record written onto the - * backing {@link FileChannel}. The offset is generally the size of the - * root blocks for a journal or the checkpoint record for an index - * segment. It can be zero if you do not have anything at the head of - * the file. - * <p> - * Note: This implies that writing the root blocks is done separately in - * the protocol since you can't write below this offset otherwise. - */ - final protected long baseOffset; - - /** - * Used to re-open the {@link FileChannel} in this class. - */ - public final IReopenChannel<FileChannel> opener; - - /** - * @param baseOffset - * An offset - * @param buf - * @param opener - * - * @throws InterruptedException - */ - public FileChannelWriteCache(final long baseOffset, final IBufferAccess buf, final boolean useChecksum, - final boolean isHighlyAvailable, final boolean bufferHasData, final IReopenChannel<FileChannel> opener) - throws InterruptedException { - - super(buf, false/* scatteredWrites */, useChecksum, isHighlyAvailable, bufferHasData); - - if (baseOffset < 0) - throw new IllegalArgumentException(); - - if (opener == null) - throw new IllegalArgumentException(); - - this.baseOffset = baseOffset; - - this.opener = opener; - - } - - @Override - protected boolean writeOnChannel(final ByteBuffer data, final long firstOffset, - final Map<Long, RecordMetadata> recordMap, final long nanos) throws InterruptedException, IOException { - - final long begin = System.nanoTime(); - - final int nbytes = data.remaining(); - - /* - * The position in the file at which the record will be written. - */ - final long pos = baseOffset + firstOffset; - - /* - * Write bytes in [data] from position to limit onto the channel. - * - * @todo This ignores the timeout. - */ - final int nwrites = FileChannelUtility.writeAll(opener, data, pos); - - final WriteCacheCounters counters = this.counters.get(); - counters.nwrite += nwrites; - counters.bytesWritten += nbytes; - counters.elapsedWriteNanos += (System.nanoTime() - begin); - - return true; - - } - - } - - /** - * The scattered write cache is used by the {@link RWStore} since the writes - * can be made to any part of the file assigned for data allocation. - * <p> - * The writeonChannel must therefore utilize the {@link RecordMetadata} to - * write each update separately. - * <p> - * To support HA, we prefix each write with the file position and buffer - * length in the cache. This enables the cache buffer to be sent as a single - * stream and the RecordMap rebuilt downstream. - * - * FIXME Once the file system cache fills up the throughput is much lower - * for the RW mode. Look into putting a thread pool to work on the scattered - * writes. This could be part of a refactor to apply a thread pool to IOs - * and related to prefetch and {@link Memoizer} behaviors. - * - * FIXME To maximize IO rates we should attempt to elide/merge contiguous - * writes. 
To do this can double-buffer in writeOnChannel. This also - * provides an opportunity to write the full slot size of the RWStore that - * may have advantages, particularly for an SSD, since it may avoid a - * pre-write read to populate the write sector. - */ - public static class FileChannelScatteredWriteCache extends WriteCache { - - /** - * Used to re-open the {@link FileChannel} in this class. - */ - private final IReopenChannel<FileChannel> opener; - - private final BufferedWrite m_bufferedWrite; - /** - * @param baseOffset - * An offset - * @param buf - * @param opener - * - * @throws InterruptedException - */ - public FileChannelScatteredWriteCache(final IBufferAccess buf, final boolean useChecksum, - final boolean isHighlyAvailable, final boolean bufferHasData, final IReopenChannel<FileChannel> opener, - final BufferedWrite bufferedWrite) - throws InterruptedException { - - super(buf, true/* scatteredWrites */, useChecksum, isHighlyAvailable, bufferHasData); - - if (opener == null) - throw new IllegalArgumentException(); - - this.opener = opener; - - m_bufferedWrite = bufferedWrite; - - } - - /** - * Called by WriteCacheService to process a direct write for large - * blocks and also to flush data from dirty caches. - */ - protected boolean writeOnChannel(final ByteBuffer data, - final long firstOffsetIgnored, - final Map<Long, RecordMetadata> recordMap, final long nanos) - throws InterruptedException, IOException { - - final long begin = System.nanoTime(); - - final int nbytes = data.remaining(); - - if (m_written) { - log.warn("DUPLICATE writeOnChannel for : " + this.hashCode()); - } else { - assert !this.isEmpty(); - - m_written = true; - } - - /* - * Retrieve the sorted write iterator and write each block to the - * file. - * - * If there is a BufferedWrite then ensure it is reset. - */ - if (m_bufferedWrite != null) { - m_bufferedWrite.reset(); - } - - int nwrites = 0; - final Iterator<Entry<Long, RecordMetadata>> entries = recordMap.entrySet().iterator(); - while (entries.hasNext()) { - - final Entry<Long, RecordMetadata> entry = entries.next(); - - final RecordMetadata md = entry.getValue(); - - // create a view on record of interest. - final ByteBuffer view = data.duplicate(); - final int pos = md.bufferOffset; - view.limit(pos + md.recordLength); - view.position(pos); - - final long offset = entry.getKey(); // offset in file to update - if (m_bufferedWrite == null) { - nwrites += FileChannelUtility.writeAll(opener, view, offset); - } else { - nwrites += m_bufferedWrite.write(offset, view, opener); - } - // if (log.isInfoEnabled()) - // log.info("writing to: " + offset); - registerWriteStatus(offset, md.recordLength, 'W'); - } - - if (m_bufferedWrite != null) { - nwrites += m_bufferedWrite.flush(opener); - - if (log.isTraceEnabled()) - log.trace(m_bufferedWrite.getStats(null, true)); - } - - final WriteCacheCounters counters = this.counters.get(); - counters.nwrite += nwrites; - counters.bytesWritten += nbytes; - counters.elapsedWriteNanos += (System.nanoTime() - begin); - - return true; - - } - - /** - * Hook to rebuild RecordMetadata after buffer has been transferred. For - * the ScatteredWriteCache this means hopping trough the buffer marking - * offsets and data size into the RecordMetadata map, and ignoring any - * zero address entries that indicate a "freed" allocation. - * - * Update: This has now been changed to avoid problems with incremental checksums by - * indicating removal by appending a new prefix where the data length is zero. 
- * - * @throws InterruptedException - */ - public void resetRecordMapFromBuffer(final ByteBuffer buf, final Map<Long, RecordMetadata> recordMap) { - recordMap.clear(); - final int sp = buf.position(); - - int pos = 0; - buf.limit(sp); - while (pos < buf.limit()) { - buf.position(pos); - long addr = buf.getLong(); - if (addr == 0L) { // end of content - break; - } - int sze = buf.getInt(); - if (sze == 0 /* deleted */) { - recordMap.remove(addr); // should only happen if previous write already made to the buffer - // but the allocation has since been freed - } else { - recordMap.put(addr, new RecordMetadata(addr, pos + 12, sze)); - } - pos += 12 + sze; // hop over buffer info (addr + sze) and then - // data - } - } - - } - - /** - * To support deletion we will remove any entries for the provided address. - * This is just to yank something out of the cache which was created and - * then immediately deleted on the RW store before it could be written - * through to the disk. This does not reclaim any space in the write cache - * since allocations are strictly sequential within the cache and can only - * be used with the RW store. The RW store uses write prefixes in the cache - * buffer so we must zero the long address element as well to indicate that - * the record was removed from the buffer. - * - * This approach has now been refined to avoid problems with incremental - * checksums which otherwise would invalidate the buffer checksum to date. - * Rather than zeroing the address of the deleted block a new zero-length - * prefix is written that when processed will ensure any current recordMap - * entry is removed. - * - * TODO: An issue to be worked through is whether there remains a problem - * with a full buffer where there is not room for the dummy "remove" prefix. - * Whilst we could of course ensure that a buffer with less than the space - * required for prefixWrites should be moved immediately to the dirtlyList, - * there would still exist the possibility that the clear could be - * requested on a buffer already on the dirtyList. It looks like this should - * not matter, since each buffer update can be considered as an atomic - * update even if the set of writes are individually not atomic (the updates - * from a previous buffer will always have been completed before the next - * buffer is processed). - * - * In that case it appears we could ignore the situation where there is no - * room for the dummy "remove" prefix, since there will be no room for a new - * write also and the buffer will be flushed either on commit or a - * subsequent write. - * - * A problem previously existed with unsynchronized access to the ByteBuffer. - * Resulting in a conflict over the position() and buffer corruption. - * - * @param addr - * The address of a cache entry. - * - * @throws InterruptedException - * @throws IllegalStateException - */ - /*public*/ void clearAddrMap(final long addr) throws IllegalStateException, InterruptedException { - final RecordMetadata entry = recordMap.remove(addr); - if (prefixWrites) { - // final int pos ... [truncated message content] |
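The prefix-write layout that resetRecordMapFromBuffer() walks in the diff above is simply a sequence of (long fileOffset, int length, payload) triples, where a zero length acts as a tombstone for a freed allocation and a zero offset terminates the scan. The following is a minimal standalone sketch of that decode loop under those assumptions; the class and member names (PrefixDecoder, Record, rebuild) are illustrative only and are not part of the bigdata WriteCache API.

import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of the prefix-write decode described above. Each record is laid
 * down as (long fileOffset, int length, payload); a zero length removes any
 * earlier entry for that offset, and a zero offset ends the content.
 * Names here are illustrative, not the bigdata API.
 */
public class PrefixDecoder {

    /** Position and length of one record within the cache buffer. */
    public static class Record {
        public final int bufferOffset;
        public final int length;
        public Record(final int bufferOffset, final int length) {
            this.bufferOffset = bufferOffset;
            this.length = length;
        }
    }

    /** Rebuild the (fileOffset -> record) index from a replicated buffer. */
    public static Map<Long, Record> rebuild(final ByteBuffer buf) {
        final Map<Long, Record> recordMap = new HashMap<Long, Record>();
        final int limit = buf.position(); // valid data ends at the current position
        int pos = 0;
        while (pos + 12 <= limit) {       // 12 = 8 byte offset + 4 byte length
            buf.position(pos);
            final long fileOffset = buf.getLong();
            if (fileOffset == 0L) {
                break;                    // end of content
            }
            final int len = buf.getInt();
            if (len == 0) {
                recordMap.remove(fileOffset);             // "remove" prefix: allocation was freed
            } else {
                recordMap.put(fileOffset, new Record(pos + 12, len));
            }
            pos += 12 + len;              // hop over the prefix and the payload
        }
        return recordMap;
    }
}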
From: <tho...@us...> - 2012-01-20 16:20:04
|
Revision: 5839 http://bigdata.svn.sourceforge.net/bigdata/?rev=5839&view=rev
Author: thompsonbry
Date: 2012-01-20 16:19:51 +0000 (Fri, 20 Jan 2012)

Log Message:
-----------
Journal/RWStore fixes related to [1,2,3].

- Journal:
  - txLog
  - newTx()
- BTree: Do not delete old checkpoint, root addr, or bloom filter in writeCheckpoint2().
- AbstractBTree: Do not delete old bloom filter.
- RootBlockCommitter: final attributes only.
- PseudoRandom: take new method => 1.1.x; add entire class to 1.0.x.
- AbstractJournal
  - txLog
  - some comments
  - brought across removeCommitRecordEntries() and assertHistoricalIndexCacheClean() methods, but they are commented out.

Rollback WriteCacheService facility for overwriting deletes:
- RWWriteCacheService: rolled back.
- WriteCacheService: rolled back.
- WriteCache: rolled back.
- AllocBlock: removed call to RWWriteCacheService#overwrite().
- AbstractTransactionService#getEarliestReleaseTime() => remove.
- RWStore
  - txLog
  - OVERWRITE_DELETE => remove.
  - isSessionProtected(): now tests for allocation lock.

[1] http://sourceforge.net/apps/trac/bigdata/ticket/440 (BTree can not be cast to Name2Addr when accessing historical state on RWStore)
[2] http://sourceforge.net/apps/trac/bigdata/ticket/443 (Logger for RWStore transaction service and recycler)
[3] http://sourceforge.net/apps/trac/bigdata/ticket/445 (RWStore does not track tx release correctly)

Modified Paths:
--------------
branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java
branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTree.java
branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java
branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java
branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java
branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Journal.java
branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/RootBlockCommitter.java
branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/AllocBlock.java
branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java
branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWWriteCacheService.java
branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java
branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java
branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/BTree.java
branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java
branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/io/writecache/WriteCacheService.java
branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java
branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/Journal.java
branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/RootBlockCommitter.java
branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/AllocBlock.java
branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java
branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/RWWriteCacheService.java
branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java
branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/util/PseudoRandom.java

Added Paths:
-----------
branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/util/PseudoRandom.java
Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2012-01-20 13:25:43 UTC (rev 5838) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2012-01-20 16:19:51 UTC (rev 5839) @@ -2026,11 +2026,20 @@ * maximum #of index entries for which the bloom filter will * have an acceptable error rate. */ + /* + * TODO The code to recycle the old checkpoint addr, the old + * root addr, and the old bloom filter has been disabled in + * writeCheckpoint2 and AbstractBTree#insert pending the + * resolution of ticket #440. This is being done to minimize + * the likelyhood that the underlying bug for that ticket + * can be tripped by the code. + * + * @see https://sourceforge.net/apps/trac/bigdata/ticket/440 + */ +// final long curAddr = filter.disable(); +// if (curAddr != IRawStore.NULL) +// store.delete(curAddr); - final long curAddr = filter.disable(); - if (curAddr != IRawStore.NULL) - store.delete(curAddr); - log.warn("Bloom filter disabled - maximum error rate would be exceeded" + ": entryCount=" + getEntryCount() Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTree.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTree.java 2012-01-20 13:25:43 UTC (rev 5838) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTree.java 2012-01-20 16:19:51 UTC (rev 5839) @@ -933,14 +933,23 @@ * The bloom filter is enabled, is loaded and is dirty, so write * it on the store now. */ + /* + * TODO The code to recycle the old checkpoint addr, the old + * root addr, and the old bloom filter has been disabled in + * writeCheckpoint2 and AbstractBTree#insert pending the + * resolution of ticket #440. This is being done to minimize + * the likelyhood that the underlying bug for that ticket + * can be tripped by the code. + * + * @see https://sourceforge.net/apps/trac/bigdata/ticket/440 + */ +// final long oldAddr = filter.getAddr(); +// if (oldAddr != IRawStore.NULL) { +// this.getBtreeCounters().bytesReleased += store.getByteCount(oldAddr); +// +// store.delete(oldAddr); +// } - final long oldAddr = filter.getAddr(); - if (oldAddr != IRawStore.NULL) { - this.getBtreeCounters().bytesReleased += store.getByteCount(oldAddr); - - store.delete(oldAddr); - } - filter.write(store); } @@ -960,20 +969,30 @@ } - // delete old checkpoint data - final long oldAddr = checkpoint != null ? checkpoint.addrCheckpoint : IRawStore.NULL; - if (oldAddr != IRawStore.NULL) { - this.getBtreeCounters().bytesReleased += store.getByteCount(oldAddr); - store.delete(oldAddr); - } + /* + * TODO The code to recycle the old checkpoint addr, the old + * root addr, and the old bloom filter has been disabled in + * writeCheckpoint2 and AbstractBTree#insert pending the + * resolution of ticket #440. This is being done to minimize + * the likelyhood that the underlying bug for that ticket + * can be tripped by the code. + * + * @see https://sourceforge.net/apps/trac/bigdata/ticket/440 + */ +// // delete old checkpoint data +// final long oldAddr = checkpoint != null ? 
checkpoint.addrCheckpoint : IRawStore.NULL; +// if (oldAddr != IRawStore.NULL) { +// this.getBtreeCounters().bytesReleased += store.getByteCount(oldAddr); +// store.delete(oldAddr); +// } +// +// // delete old root data if changed +// final long oldRootAddr = checkpoint != null ? checkpoint.getRootAddr() : IRawStore.NULL; +// if (oldRootAddr != IRawStore.NULL && oldRootAddr != root.identity) { +// this.getBtreeCounters().bytesReleased += store.getByteCount(oldRootAddr); +// store.delete(oldRootAddr); +// } - // delete old root data if changed - final long oldRootAddr = checkpoint != null ? checkpoint.getRootAddr() : IRawStore.NULL; - if (oldRootAddr != IRawStore.NULL && oldRootAddr != root.identity) { - this.getBtreeCounters().bytesReleased += store.getByteCount(oldRootAddr); - store.delete(oldRootAddr); - } - // create new checkpoint record. checkpoint = metadata.newCheckpoint(this); Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java 2012-01-20 13:25:43 UTC (rev 5838) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/io/writecache/WriteCache.java 2012-01-20 16:19:51 UTC (rev 5839) @@ -1,2232 +1,2211 @@ -/** - -Copyright (C) SYSTAP, LLC 2006-2010. All rights reserved. - -Contact: - SYSTAP, LLC - 4501 Tower Road - Greensboro, NC 27410 - lic...@bi... - -This program is free software; you can redistribute it and/or modify -it under the terms of the GNU General Public License as published by -the Free Software Foundation; version 2 of the License. - -This program is distributed in the hope that it will be useful, -but WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -GNU General Public License for more details. 
- -You should have received a copy of the GNU General Public License -along with this program; if not, write to the Free Software -Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - */ -/* - * Created on Feb 10, 2010 - */ - -package com.bigdata.io.writecache; - -import java.io.IOException; -import java.nio.ByteBuffer; -import java.nio.channels.FileChannel; -import java.util.Collection; -import java.util.Collections; -import java.util.Iterator; -import java.util.Map; -import java.util.Map.Entry; -import java.util.concurrent.ConcurrentHashMap; -import java.util.concurrent.ConcurrentMap; -import java.util.concurrent.ConcurrentSkipListMap; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.TimeoutException; -import java.util.concurrent.atomic.AtomicLong; -import java.util.concurrent.atomic.AtomicReference; -import java.util.concurrent.locks.Lock; -import java.util.concurrent.locks.ReentrantReadWriteLock; - -import org.apache.log4j.Logger; - -import com.bigdata.btree.IndexSegmentBuilder; -import com.bigdata.counters.CAT; -import com.bigdata.counters.CounterSet; -import com.bigdata.counters.Instrument; -import com.bigdata.io.DirectBufferPool; -import com.bigdata.io.FileChannelUtility; -import com.bigdata.io.IBufferAccess; -import com.bigdata.io.IReopenChannel; -import com.bigdata.journal.AbstractBufferStrategy; -import com.bigdata.journal.StoreTypeEnum; -import com.bigdata.journal.WORMStrategy; -import com.bigdata.journal.ha.HAWriteMessage; -import com.bigdata.rawstore.Bytes; -import com.bigdata.rawstore.IRawStore; -import com.bigdata.rwstore.RWStore; -import com.bigdata.util.ChecksumError; -import com.bigdata.util.ChecksumUtility; -import com.bigdata.util.concurrent.Memoizer; - -/** - * This class provides a write cache with read-through for NIO writes on a - * {@link FileChannel} (and potentially on a remote service). This class is - * designed to maximize the opportunity for efficient NIO by combining many - * writes onto a single direct {@link ByteBuffer} and then efficiently - * transferring those writes onto the backing channel in a channel dependent - * manner. In general, there are three use cases for a {@link WriteCache}: - * <ol> - * <li>Gathered writes. This case is used by the {@link RWStore}.</li> - * <li>Pure append of sequentially allocated records. This case is used by the - * {@link WORMStrategy} (WORM) and by the {@link IndexSegmentBuilder}.</li> - * <li>Write of a single large buffer owned by the caller. This case may be used - * when the caller wants to manage the buffers or when the caller's buffer is - * larger than the write cache.</li> - * </ol> - * The caller is responsible for managing which buffers are being written on and - * read on, when they are flushed, and when they are reset. It is perfectly - * reasonable to have more than one {@link WriteCache} and to read through on - * any {@link WriteCache} until it has been recycled. A {@link WriteCache} must - * be reset before it is put into play again for new writes. - * <p> - * Note: For an append-only model (WORM), the caller MUST serialize writes onto - * the {@link IRawStore} and the {@link WriteCache}. This is required in order - * to ensure that the records are laid out in a dense linear fashion on the - * {@link WriteCache} and permits the backing buffer to be transferred in a - * single IO to the backing file. 
- * <p> - * Note: For a {@link RWStore}, the caller must take more responsibility for - * managing the {@link WriteCache}(s) which are in play and scheduling their - * eviction onto the backing store. The caller can track the space remaining in - * each {@link WriteCache} and decide when to flush a {@link WriteCache} based - * on that information. - * - * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ - */ -abstract public class WriteCache implements IWriteCache { - - protected static final Logger log = Logger.getLogger(WriteCache.class); - - /** - * <code>true</code> iff per-record checksums are being maintained. - */ - private final boolean useChecksum; - - /** - * <code>true</code> iff per-record checksums are being maintained. - */ - private final boolean prefixWrites; - - /** - * The buffer used to absorb writes that are destined for some channel. - * <p> - * Note: This is an {@link AtomicReference} since we want to clear this - * field in {@link #close()}. - */ - final private AtomicReference<IBufferAccess> buf; - - /** - * The read lock allows concurrent {@link #acquire()}s while the write lock - * prevents {@link #acquire()} during critical sections such as - * {@link #flush(boolean, long, TimeUnit)}, {@link #reset()}, and - * {@link #close()}. - */ - final private ReentrantReadWriteLock lock = new ReentrantReadWriteLock(); - - /** - * Return the backing {@link ByteBuffer}. The caller may read or write on - * the buffer, but MUST NOT have a side effect on the - * {@link ByteBuffer#position()} without first synchronizing on the - * {@link ByteBuffer}. Once they are done, the caller MUST call - * {@link #release()}. - * <p> - * Note: This uses the read lock to allow concurrent read/write operations - * on the backing buffer. - * <p> - * Note: <strong>At most one write operation may execute concurrently in - * order to avoid side effects on the buffers position when copying data - * onto the buffer. This constraint must be imposed by the caller using a - * <code>synchronized(buf){}</code> block during the critical sections where - * the buffer position will be updated by a write. </strong> - * - * @return The {@link ByteBuffer}. - * - * @throws InterruptedException - * @throws IllegalStateException - * if the {@link WriteCache} is closed. - */ - private ByteBuffer acquire() throws InterruptedException, IllegalStateException { - - final Lock readLock = lock.readLock(); - - readLock.lockInterruptibly(); - - try { - - // latch.inc(); - - final IBufferAccess tmp = buf.get(); - - if (tmp == null) { - - // latch.dec(); - - throw new IllegalStateException(); - - } - - // Note: The ReadLock is still held! - return tmp.buffer(); - - } catch (Throwable t) { - - // Release the lock only on the error path. - readLock.unlock(); - - if (t instanceof InterruptedException) - throw (InterruptedException) t; - - if (t instanceof IllegalStateException) - throw (IllegalStateException) t; - - throw new RuntimeException(t); - - } - - } - - /** - * Release the read lock on an acquired {@link ByteBuffer}. - */ - private void release() { - - lock.readLock().unlock(); - - // latch.dec(); - - } - - /** - * Return a read-only view of the backing {@link ByteBuffer}. - * - * @return The read-only view -or- <code>null</code> if the - * {@link WriteCache} has been closed. - */ - ByteBuffer peek() { - - final ByteBuffer b = buf.get().buffer(); - - return b == null ? null : b.asReadOnlyBuffer(); - - } - - // /** - // * Return the buffer. No other thread will have access to the buffer. 
No - // * latch is established and there is no protocol for releasing the buffer - // * back. Instead, the buffer will become available again if the caller - // * releases the write lock. - // * - // * @throws IllegalMonitorStateException - // * unless the caller is holding the write lock. - // * @throws IllegalStateException - // * if the buffer reference has been cleared. - // */ - // protected ByteBuffer getExclusiveBuffer() { - // - // if (!lock.writeLock().isHeldByCurrentThread()) - // throw new IllegalMonitorStateException(); - // - // final ByteBuffer tmp = buf.get(); - // - // if (tmp == null) - // throw new IllegalStateException(); - // - // return tmp; - // - // } - - /** - * The metadata associated with a record in the {@link WriteCache}. - */ - public static class RecordMetadata { - - /** - * The offset of the record in the file. The offset may be relative to a - * base offset known to the writeOnChannel() implementation. - */ - public final long fileOffset; - - /** - * The offset within the {@link WriteCache}'s backing {@link ByteBuffer} - * of the start of the record. - */ - public final int bufferOffset; - - /** - * The length of the record in bytes as it will be written on the - * channel. If checksums are being written, then the length of the - * record has already been incorporated into this value. - */ - public final int recordLength; - - /** - * Indicates whether this record is an overwrite record - */ - private final boolean overwrite; - - public RecordMetadata(final long fileOffset, final int bufferOffset, final int recordLength, final boolean overwrite) { - - this.fileOffset = fileOffset; - - this.bufferOffset = bufferOffset; - - this.recordLength = recordLength; - - this.overwrite = overwrite; - - } - - public String toString() { - - return getClass().getSimpleName() + "{fileOffset=" + fileOffset + ",off=" + bufferOffset + ",len=" - + recordLength + "}"; - - } - - public boolean isOverwrite() { - return overwrite; - } - - } - - /** - * An index into the write cache used for read through on the cache. The - * keys are the file offsets that would be used to read the corresponding - * record. The values describe the position in buffer where that record is - * found and the length of the record. - */ - final private ConcurrentMap<Long, RecordMetadata> recordMap; - - /** - * The offset of the first record written onto the {@link WriteCache}. This - * information is used when {@link #appendOnly} is <code>true</code> as it - * gives the starting offset at which the entire {@link ByteBuffer} may be - * written in a single IO. When {@link #appendOnly} is <code>false</code> - * this is basically meaningless. This is initialized to <code>-1L</code> as - * a clear indicator that there is no valid record written yet onto the - * cache. - */ - final private AtomicLong firstOffset = new AtomicLong(-1L); - - /** - * The capacity of the backing buffer. - */ - final private int capacity; - - /** - * When <code>true</code> {@link #close()} will release the - * {@link ByteBuffer} back to the {@link DirectBufferPool}. - */ - final private boolean releaseBuffer; - - /** - * A private instance used to compute the checksum of all data in the - * current {@link #buf}. This is enabled for the high availability write - * replication pipeline. The checksum over the entire {@link #buf} is - * necessary in this context to ensure that the receiver can verify the - * contents of the {@link #buf}. 
The per-record checksums CAN NOT be used - * for this purpose since large records may be broken across - */ - final private ChecksumHelper checker; - - /** - * The then current extent of the backing file as of the last record written - * onto the cache before it was written onto the write replication pipeline. - * The receiver is responsible for adjusting its local file size to match. - * - * @see WriteCacheService#setExtent(long) - */ - private final AtomicLong fileExtent = new AtomicLong(); - - /** - * Create a {@link WriteCache} from either a caller supplied buffer or a - * direct {@link ByteBuffer} allocated from the {@link DirectBufferPool}. - * <p> - * Note: The application MUST ensure that it {@link #close()}s the - * {@link WriteCache} or it can leak direct {@link ByteBuffer}s! - * <p> - * Note: NIO operations are performed using a direct {@link ByteBuffer} - * (that is, one use backing bytes are allocated on the C heap). When the - * caller supplies a {@link ByteBuffer} that is allocated on the Java heap - * as opposed to in native memory, a temporary direct {@link ByteBuffer} - * will be allocated for the IO operation by Java. The JVM can fail to - * release this temporary direct {@link ByteBuffer}, resulting in a memory - * leak. For this reason, the {@link WriteCache} SHOULD use a direct - * {@link ByteBuffer}. - * - * @see http://bugs.sun.com/bugdatabase/view_bug.do;jsessionid=8f - * ab76d1d4479fffffffffa5abfb09c719a30?bug_id=6210541 - * - * @param buf - * A {@link ByteBuffer} to be used as the write cache (optional). - * When <code>null</code> a buffer will be allocated for you from - * the {@link DirectBufferPool}. Buffers allocated on your behalf - * will be automatically released by {@link #close()}. - * @param scatteredWrites - * <code>true</code> iff the implementation uses scattered - * writes. The RW store uses scattered writes since its updates - * are written to different parts of the backing file. The WORM - * store does not since all updates are written to the end of the - * user extent in the backing file. - * @param useChecksum - * <code>true</code> iff the write cache will store the caller's - * checksum for a record and validate it on read. - * @param isHighlyAvailable - * when <code>true</code> the whole record checksum is maintained - * for use when replicating the write cache to along the write - * pipeline. - * @param bufferHasData - * when <code>true</code> the caller asserts that the buffer has - * data (from a replicated write), in which case the position - * should be the start of the data in the buffer and the limit - * the #of bytes with valid data. when <code>false</code>, the - * caller's buffer will be cleared. The code presumes that the - * {@link WriteCache} instance will be used to lay down a single - * buffer worth of data onto the backing file. 
- * - * @throws InterruptedException - */ - public WriteCache(IBufferAccess buf, final boolean scatteredWrites, final boolean useChecksum, - final boolean isHighlyAvailable, final boolean bufferHasData) throws InterruptedException { - - if (bufferHasData && buf == null) - throw new IllegalArgumentException(); - - if (buf == null) { - - buf = DirectBufferPool.INSTANCE.acquire(); - - this.releaseBuffer = true; - - } else { - - this.releaseBuffer = false; - - } - - // if (quorumManager == null) - // throw new IllegalArgumentException(); - - // this.quorumManager = quorumManager; - - this.useChecksum = useChecksum; - this.prefixWrites = scatteredWrites; - - if (isHighlyAvailable && !bufferHasData) { - // Note: No checker if buffer has data. - checker = new ChecksumHelper(); - } else { - checker = null; - } - - // save reference to the write cache. - this.buf = new AtomicReference<IBufferAccess>(buf); - - // the capacity of the buffer in bytes. - this.capacity = buf.buffer().capacity(); - - /* - * Discard anything in the buffer, resetting the position to zero, the - * mark to zero, and the limit to the capacity. - */ - if (!bufferHasData) { - buf.buffer().clear(); - } - - /* - * An estimate of the #of records that might fit within the write cache. - * This is based on an assumption that the "average" record is 1k. This - * is used solely to assign the initial capacity of this map. - */ - final int indexDefaultCapacity = capacity / (1 * Bytes.kilobyte32); - - /* - * allocate and initialize the write cache index. - * - * For scattered writes we choose to use a sorted map so that we can - * easily flush writes to the file channel in order. This may not be - * important depending on the caching strategy of the underlying system - * but it cannot be a bad thing. - * - * If we do not need to support scattered writes then we have the option - * to use the ConcurrentHashMap which has the advantage of constant - * access time for read through support. - * - * TODO: some literature indicates the ConcurrentSkipListMap scales - * better with concurrency, so we should benchmark this option for - * non-scattered writes as well. - */ - if (scatteredWrites) { - recordMap = new ConcurrentSkipListMap<Long, RecordMetadata>(); - } else { - recordMap = new ConcurrentHashMap<Long, RecordMetadata>(indexDefaultCapacity); - } - - if (bufferHasData) { - /* - * Populate the record map from the record. - */ - resetRecordMapFromBuffer(); - } - - } - - /** - * Adds some debugging information. - */ - public String toString() { - - return super.toString()// - + "{recordCount=" + recordMap.size()// - + ",firstOffset=" + firstOffset// - + ",releaseBuffer=" + releaseBuffer// - + ",bytesWritten=" + bytesWritten()// - + ",bytesRemaining=" + remaining()// - + "}"; - - } - - /** - * The offset of the first record written onto the {@link WriteCache}. This - * information is used when {@link #appendOnly} is <code>true</code> as it - * gives the starting offset at which the entire {@link ByteBuffer} may be - * written in a single IO. When {@link #appendOnly} is <code>false</code> - * this is basically meaningless. - * <p> - * Note: This has been raised into the - * {@link #writeOnChannel(ByteBuffer, long, Map, long)} method signature. It - * has been reduced to a package private method so it will remain visible to - * the unit tests, otherwise it could become private. 
- * - * @return The first offset written into the {@link WriteCache} since it was - * last {@link #reset()} and <code>-1L</code> if nothing has been - * written since the {@link WriteCache} was created or was last - * {@link #reset()}. - */ - final long getFirstOffset() { - - return firstOffset.get(); - - } - - /** - * The maximum length of a record which could be inserted into the buffer. - * <p> - * Note: When checksums are enabled, this is 4 bytes less than the actual - * capacity of the underlying buffer since each record requires an - * additional four bytes for the checksum field. - */ - final public int capacity() { - - return capacity - (useChecksum ? 4 : 0) - (prefixWrites ? 12 : 0); - - } - - /** - * Return the #of bytes remaining in the buffer. - * <p> - * Note: in order to rely on this value the caller MUST have exclusive - * access to the buffer. This API does not provide the means for acquiring - * that exclusive access. This is something that the caller has to arrange - * for themselves, which is why this is a package private method. - */ - final int remaining() { - - final int remaining = capacity - buf.get().buffer().position(); - - return remaining; - - } - - /** - * The #of bytes written on the backing buffer. - */ - public final int bytesWritten() { - - return buf.get().buffer().position(); - - } - - /** - * Return <code>true</code> if there are no records buffered on the cache. - * - * @todo This currently tests the {@link #recordMap}. In fact, for at least - * the {@link RWStore} the record map COULD be empty with cleared - * writes on the backing {@link ByteBuffer}. Therefore this tests - * whether the {@link WriteCache} has data to be written but does not - * clearly report whether or not some data has been written onto the - * buffer (and hence it has fewer bytes remaining than might otherwise - * be expected). - */ - final public boolean isEmpty() { - - return recordMap.isEmpty(); - - } - - /** - * Set the current extent of the backing file on the {@link WriteCache} - * object. When used as part of an HA write pipeline, the receiver is - * responsible for adjusting its local file size to match the file extent in - * each {@link WriteCache} message. - * - * @param fileExtent - * The current extent of the file. - * - * @throws IllegalArgumentException - * if the file extent is negative. - * - * @see WriteCacheService#setExtent(long) - */ - public void setFileExtent(final long fileExtent) { - - if (fileExtent < 0L) - throw new IllegalArgumentException(); - - this.fileExtent.set(fileExtent); - - } - - public long getFileExtent() { - - return fileExtent.get(); - - } - - /** - * Return the checksum of all data written into the backing buffer for this - * {@link WriteCache} instance since it was last {@link #reset()}. - * - * @return The running checksum of the data written into the backing buffer. - * - * @throws UnsupportedOperationException - * if the {@link WriteCache} is not maintaining this checksum - * (i.e., if <code>isHighlyAvailable := false</code> was - * specified to the constructor). - */ - public int getWholeBufferChecksum() { - - if (checker == null) - throw new UnsupportedOperationException(); - - return checker.getChecksum(); - - } - - /** - * {@inheritDoc} - * - * @throws IllegalStateException - * If the buffer is closed. - * @throws IllegalArgumentException - * If the caller's record is larger than the maximum capacity of - * cache (the record could not fit within the cache). 
The caller - * should check for this and provide special handling for such - * large records. For example, they can be written directly onto - * the backing channel. - */ - public boolean write(final long offset, final ByteBuffer data, final int chk) throws InterruptedException { - - return write(offset, data, chk, true/* writeChecksum */); - - } - - public boolean write(final long offset, final ByteBuffer data, final int chk, boolean writeChecksum) - throws InterruptedException { - return write(offset, data, chk, writeChecksum, false/* overwrite */); - } - - /** - * - * @param offset - * @param data - * @param chk - * @param writeChecksum - * The checksum is appended to the record IFF this argument is - * <code>true</code> and checksums are in use. - * @return - * @throws InterruptedException - */ - boolean write(final long offset, final ByteBuffer data, final int chk, boolean writeChecksum, final boolean overwrite) - throws InterruptedException { - - // Note: The offset MAY be zero. This allows for stores without any - // header block. - - if (m_written) { // should be clean, NO WAY should this be written to! - log.error("Writing to CLEAN cache: " + hashCode()); - throw new IllegalStateException("Writing to CLEAN cache: " + hashCode()); - } - - if (data == null) - throw new IllegalArgumentException(AbstractBufferStrategy.ERR_BUFFER_NULL); - - final WriteCacheCounters counters = this.counters.get(); - - final ByteBuffer tmp = acquire(); - - try { - - final int remaining = data.remaining(); - - // The #of bytes to transfer into the write cache. - final int datalen = remaining + (writeChecksum && useChecksum ? 4 : 0); - final int nwrite = datalen + (prefixWrites ? 12 : 0); - - if (nwrite > capacity) { - // This is more bytes than the total capacity of the buffer. - throw new IllegalArgumentException(AbstractBufferStrategy.ERR_BUFFER_OVERRUN); - - } - - if (remaining == 0) - throw new IllegalArgumentException(AbstractBufferStrategy.ERR_BUFFER_EMPTY); - - /* - * Note: We need to be synchronized on the ByteBuffer here since - * this operation relies on the position() being stable. - * - * Note: Also see clearAddrMap(long) which is synchronized on the - * acquired ByteBuffer in the same manner to protect it during - * critical sections which have a side effect on the buffer - * position. - */ - final int pos; - synchronized (tmp) { - - // the position() at which the record is cached in the buffer. - final int spos = tmp.position(); - - if (spos + nwrite > capacity) { - - /* - * There is not enough room left in the write cache for this - * record. - */ - - return false; - - } - - // add prefix data if required and set data position in buffer - if (prefixWrites) { - tmp.putLong(offset); - tmp.putInt(datalen); - pos = spos + 12; - } else { - pos = spos; - } - - tmp.put(data); - - // copy the record into the cache, updating position() as we go. - // TODO: Note that the checker must be invalidated if a RWCache - // "deletes" an entry - // by zeroing an address. - if (checker != null) { - // update the checksum (no side-effects on [data]) - ByteBuffer chkBuf = tmp.asReadOnlyBuffer(); - chkBuf.position(spos); - chkBuf.limit(tmp.position()); - checker.update(chkBuf); - } - - // write checksum - if any - if (writeChecksum && useChecksum) { - tmp.putInt(chk); - if (checker != null) { - // update the running checksum to include this too. - checker.update(chk); - } - } - - // set while synchronized since no contention. 
- firstOffset.compareAndSet(-1L/* expect */, offset/* update */); - - // update counters while holding the lock. - counters.naccept++; - counters.bytesAccepted += nwrite; - - } // synchronized(tmp) - - /* - * Add metadata for the record so it can be read back from the - * cache. - */ - final RecordMetadata xentry = recordMap.put(Long.valueOf(offset), new RecordMetadata(offset, pos, datalen, overwrite)); - if (xentry != null && !xentry.isOverwrite()) { - /* - * Note: This exception indicates that the abort protocol did - * not reset() the current write cache before new writes were - * laid down onto the buffer. - */ - throw new AssertionError("Record exists for offset in cache: offset=" + offset); - } - - if (log.isTraceEnabled()) { // @todo rather than hashCode() set a - // buffer# on each WriteCache instance. - log.trace("offset=" + offset + ", pos=" + pos + ", nwrite=" + nwrite + ", writeChecksum=" - + writeChecksum + ", useChecksum=" + useChecksum + ", nrecords=" + recordMap.size() - + ", hashCode=" + hashCode()); - } - - return true; - - } finally { - - release(); - - } - - } - - /** - * {@inheritDoc} - * - * @throws IllegalStateException - * If the buffer is closed. - */ - public ByteBuffer read(final long offset) throws InterruptedException, ChecksumError { - - final WriteCacheCounters counters = this.counters.get(); - - final ByteBuffer tmp = acquire(); - - try { - - // Look up the metadata for that record in the cache. - final RecordMetadata md; - if ((md = recordMap.get(offset)) == null) { - - // The record is not in this write cache. - counters.nmiss.increment(); - - return null; - } - - // length of the record w/o checksum field. - final int reclen = md.recordLength - (useChecksum ? 4 : 0); - - // the start of the record in writeCache. - final int pos = md.bufferOffset; - - // create a view with same offset, limit and position. - final ByteBuffer view = tmp.duplicate(); - - // adjust the view to just the record of interest. - view.limit(pos + reclen); - view.position(pos); - - // System.out.println("WriteCache, addr: " + offset + ", from: " + - // pos + ", " + md.recordLength + ", thread: " + - // Thread.currentThread().getId()); - /* - * Copy the data into a newly allocated buffer. This is necessary - * because our hold on the backing ByteBuffer for the WriteCache is - * only momentary. As soon as we release() the buffer the data in - * the buffer could be changed. - */ - - final byte[] b = new byte[reclen]; - - final ByteBuffer dst = ByteBuffer.wrap(b); - - // copy the data into [dst] (and the backing byte[]). - dst.put(view); - - // flip buffer for reading. - dst.flip(); - - if (useChecksum) { - - final int chk = tmp.getInt(pos + reclen); - - if (chk != ChecksumUtility.threadChk.get().checksum(b, 0/* offset */, reclen)) { - - // Note: [offset] is a (possibly relative) file offset. - throw new ChecksumError(checkdata()); - - } - - } - - counters.nhit.increment(); - - if (log.isTraceEnabled()) { - log.trace(show(dst, "read bytes")); - } - - return dst; - - } finally { - - release(); - - } - - } - - /** - * Dump some metadata and leading bytes from the buffer onto a - * {@link String}. - * - * @param buf - * The buffer. - * @param prefix - * A prefix for the dump. - * - * @return The {@link String}. 
- */ - private String show(final ByteBuffer buf, final String prefix) { - final StringBuffer str = new StringBuffer(); - int tpos = buf.position(); - if (tpos == 0) { - tpos = buf.limit(); - } - str.append(prefix + ", length: " + tpos + " : "); - for (int tb = 0; tb < tpos && tb < 20; tb++) { - str.append(Integer.toString(buf.get(tb)) + ","); - } - // log.trace(str.toString()); - return str.toString(); - } - - // private String show(final byte[] buf, int len, final String prefix) { - // final StringBuffer str = new StringBuffer(); - // str.append(prefix + ": "); - // int tpos = len; - // str.append(prefix + ", length: " + tpos + " : "); - // for (int tb = 0; tb < tpos && tb < 20; tb++) { - // str.append(Integer.toString(buf[tb]) + ","); - // } - // // log.trace(str.toString()); - // return str.toString(); - // } - - /** - * Flush the writes to the backing channel but DOES NOT sync the channel and - * DOES NOT {@link #reset()} the {@link WriteCache}. {@link #reset()} is a - * separate operation because a common use is to retain recently flushed - * instances for read-back. - * - * @param force - * When <code>true</code>, the data will be forced to stable - * media. - * - * @throws IOException - * @throws InterruptedException - */ - public void flush(final boolean force) throws IOException, InterruptedException { - - try { - - if (!flush(force, Long.MAX_VALUE, TimeUnit.NANOSECONDS)) { - - throw new RuntimeException(); - - } - - } catch (TimeoutException e) { - - throw new RuntimeException(e); - - } - - } - - /** - * Flush the writes to the backing channel but DOES NOT sync the channel and - * DOES NOT {@link #reset()} the {@link WriteCache}. {@link #reset()} is a - * separate operation because a common use is to retain recently flushed - * instances for read-back. - * - * @param force - * When <code>true</code>, the data will be forced to stable - * media. - * - * @throws IOException - * @throws TimeoutException - * @throws InterruptedException - */ - public boolean flush(final boolean force, final long timeout, final TimeUnit unit) throws IOException, - TimeoutException, InterruptedException { - - // start time - final long begin = System.nanoTime(); - - // total nanoseconds to wait. - final long nanos = unit.toNanos(timeout); - - // remaining nanoseconds to wait. - long remaining = nanos; - - final WriteCacheCounters counters = this.counters.get(); - - final Lock writeLock = lock.writeLock(); - - if (!writeLock.tryLock(remaining, TimeUnit.NANOSECONDS)) { - - return false; - - } - - try { - - final ByteBuffer tmp = this.buf.get().buffer(); - - if (tmp == null) - throw new IllegalStateException(); - - // #of bytes to write on the disk. - final int nbytes = tmp.position(); - - if (log.isTraceEnabled()) { - log.trace("nbytes=" + nbytes + ", firstOffset=" + getFirstOffset() + ", nflush=" + counters.nflush); - } - - if (nbytes == 0) { - - // NOP. - return true; - - } - - /* - * Create a view with same offset, limit and position. - * - * Note: The writeOnChannel method is given the view. This prevents - * it from adjusting the position() on the backing buffer. - */ - { - - final ByteBuffer view = tmp.duplicate(); - - // adjust the view to just the dirty record. - view.limit(nbytes); - view.position(0); - - // remaining := (total - elapsed). - remaining = nanos - (System.nanoTime() - begin); - - // write the data on the disk file. 
- final boolean ret = writeOnChannel(view, getFirstOffset(), Collections.unmodifiableMap(recordMap), - remaining); - - if (!ret) { - throw new TimeoutException("Unable to flush WriteCache"); - } - - counters.nflush++; - - return ret; - - } - - } finally { - - writeLock.unlock(); - - } - - } - - /** - * Debug routine logs @ ERROR additional information when a checksum error - * has been encountered. - * - * @return An informative error message. - * - * @throws InterruptedException - * @throws IllegalStateException - */ - private String checkdata() throws IllegalStateException, InterruptedException { - - if (!useChecksum) { - return "Unable to check since checksums are not enabled"; - } - - ByteBuffer tmp = acquire(); - try { - int nerrors = 0; - int nrecords = recordMap.size(); - - for (Entry<Long, RecordMetadata> ent : recordMap.entrySet()) { - - final RecordMetadata md = ent.getValue(); - - // length of the record w/o checksum field. - final int reclen = md.recordLength - 4; - - // the start of the record in writeCache. - final int pos = md.bufferOffset; - - final int chk = tmp.getInt(pos + reclen); - - // create a view with same offset, limit and position. - final ByteBuffer view = tmp.duplicate(); - - // adjust the view to just the record of interest. - view.limit(pos + reclen); - view.position(pos); - - final byte[] b = new byte[reclen]; - - final ByteBuffer dst = ByteBuffer.wrap(b); - - // copy the data into [dst] (and the backing byte[]). - dst.put(view); - if (chk != ChecksumUtility.threadChk.get().checksum(b, 0/* offset */, reclen)) { - log.error("Bad data for address: " + ent.getKey()); - nerrors++; - } - - } - return "WriteCache checkdata - records: " + nrecords + ", errors: " + nerrors; - } finally { - release(); - } - } - - /** - * Write the data from the buffer onto the channel. This method provides a - * uniform means to request that the buffer write itself onto the backing - * channel, regardless of whether the channel is backed by a file, a socket, - * etc. - * <p> - * Implementations of this method MAY support gathered writes, depending on - * the channel. The necessary information to perform a gathered write is - * present in the <i>recordMap</i>. On the other hand, the implementation - * MAY require that the records in the cache are laid out for a WORM, in - * which case {@link #getFirstOffset()} provides the starting offset for the - * data to be written. The application MUST coordinate the requirements for - * a R/W or WORM store with the use of the {@link WriteCache} and the means - * to write on the backing channel. - * - * @param buf - * The data to be written. Only the dirty bytes are visible in - * this view. The implementation should write all bytes from the - * current position to the limit. - * @param firstOffset - * The offset of the first record in the recordMap into the file - * (may be relative to a base offset within the file). This is - * provided as an optimization for the WORM which writes its - * records contiguously on the backing store. - * @param recordMap - * The mapping of record offsets onto metadata about those - * records. - * @param nanos - * The timeout for the operation in nanoseconds. - * - * @return <code>true</code> if the operation was completed successfully - * within the time alloted. - * - * @throws InterruptedException - * if the thread was interrupted. - * @throws IOException - * if there was an IO problem. 
- */ - abstract protected boolean writeOnChannel(final ByteBuffer buf, final long firstOffset, - final Map<Long, RecordMetadata> recordMap, final long nanos) throws InterruptedException, TimeoutException, - IOException; - - /** - * {@inheritDoc}. - * <p> - * This implementation clears the buffer, the record map, and other internal - * metadata such that the {@link WriteCache} is prepared to receive new - * writes. - * - * @throws IllegalStateException - * if the write cache is closed. - */ - public void reset() throws InterruptedException { - - final Lock writeLock = lock.writeLock(); - - writeLock.lockInterruptibly(); - - try { - - // // wait until there are no readers using the buffer. - // latch.await(); - - final ByteBuffer tmp = buf.get().buffer(); - - if (tmp == null) { - - // Already closed. - throw new IllegalStateException(); - - } - - // reset all state. - _resetState(tmp); - - } finally { - - writeLock.unlock(); - - } - - } - - /** - * Permanently take the {@link WriteCache} instance out of service. If the - * buffer was allocated by the {@link WriteCache} then it is released back - * to the {@link DirectBufferPool}. After this method is called, records can - * no longer be read from nor written onto the {@link WriteCache}. It is - * safe to invoke this method more than once. - * <p> - * Concurrent {@link #read(long, int)} requests will be serviced if the - * already hold the the read lock but requests will fail once the - * - * @throws InterruptedException - */ - public void close() throws InterruptedException { - - final Lock writeLock = lock.writeLock(); - - writeLock.lockInterruptibly(); - - try { - - // // wait until there are no readers using the buffer. - // latch.await(); - - /* - * Note: This method is thread safe. Only one thread will manage to - * clear the AtomicReference and it will do the rest of the work as - * well. - */ - - // position := 0; limit := capacity. - final IBufferAccess tmp = buf.get(); - - if (tmp == null) { - - // Already closed. - return; - - } - - if (buf.compareAndSet(tmp/* expected */, null/* update */)) { - - try { - - _resetState(tmp.buffer()); - - } finally { - - if (releaseBuffer) { - - tmp.release(); - - } - - } - - } - - } finally { - - writeLock.unlock(); - - } - - } - - /** - * Reset the internal state of the {@link WriteCache} in preparation to - * reuse it to receive more writes. - * <p> - * Note: Keep private unless strong need for override since you can not call - * this method without holding the write lock - * - * @param tmp - */ - private void _resetState(final ByteBuffer tmp) { - - if (tmp == null) - throw new IllegalArgumentException(); - - // clear the index since all records were flushed to disk. - recordMap.clear(); - - // clear to well known invalid offset. - firstOffset.set(-1L); - - // position := 0; limit := capacity. - tmp.clear(); - - if (checker != null) { - - // reset the running checksum of the data written onto the backing - // buffer. - checker.reset(); - - } - - // Martyn: I moved your debug flag here so it is always cleared by - // reset(). - m_written = false; - - } - - /** - * Return the RMI message object that will accompany the payload from the - * {@link WriteCache} when it is replicated along the write pipeline. - * - * @return cache A {@link WriteCache} to be replicated. - */ - public HAWriteMessage newHAWriteMessage(final long quorumToken) { - - return new HAWriteMessage(bytesWritten(), getWholeBufferChecksum(), - prefixWrites ? 
StoreTypeEnum.RW : StoreTypeEnum.WORM, - quorumToken, fileExtent.get(), firstOffset.get()); - - } - - /** - * The current performance counters. - */ - protected final AtomicReference<WriteCacheCounters> counters = new AtomicReference<WriteCacheCounters>( - new WriteCacheCounters()); - - /** - * Sets the performance counters to be used by the write cache. A service - * should do this if you want to aggregate the performance counters across - * multiple {@link WriteCache} instances. - * - * @param newVal - * The shared performance counters. - * - * @throws IllegalArgumentException - * if the argument is <code>null</code>. - */ - void setCounters(final WriteCacheCounters newVal) { - - if (newVal == null) - return; - - this.counters.set(newVal); - - } - - /** - * Return the performance counters for the write cacher. - */ - public CounterSet getCounters() { - - return counters.get().getCounters(); - - } - - /** - * Performance counters for the {@link WriteCache}. - * <p> - * Note: thread-safety is required for: {@link #nhit} and {@link #nmiss}. - * The rest should be Ok without additional synchronization, CAS operators, - * etc (mainly because they are updated while holding a lock). - * - * @author <a href="mailto:tho...@us...">Bryan - * Thompson</a> - */ - public static class WriteCacheCounters { - - /* - * read on the cache. - */ - - /** - * #of read requests that are satisfied by the write cache. - */ - public final CAT nhit = new CAT(); - - /** - * The #of read requests that are not satisfied by the write cache. - */ - public final CAT nmiss = new CAT(); - - /* - * write on the cache. - */ - - /** - * #of records accepted for eventual write onto the backing channel. - */ - public long naccept; - - /** - * #of bytes accepted for eventual write onto the backing channel. - */ - public long bytesAccepted; - - /* - * write on the channel. - */ - - /** - * #of times {@link IWriteCache#flush(boolean)} was called. - */ - public long nflush; - - /** - * #of writes on the backing channel. Note that some write cache - * implementations do ordered writes and will therefore do one write per - * record while others do append only and therefore do one write per - * write cache flush. Note that in both cases we may have to redo a - * write if the backing channel was concurrently closed, so the value - * here can diverge from the #of accepted records and the #of requested - * flushes. - */ - public long nwrite; - - /** - * #of bytes written onto the backing channel. - */ - public long bytesWritten; - - /** - * Total elapsed time writing onto the backing channel. - */ - public long elapsedWriteNanos; - - public CounterSet getCounters() { - - final CounterSet root = new CounterSet(); - - /* - * read on the cache. - */ - - root.addCounter("nhit", new Instrument<Long>() { - public void sample() { - setValue(nhit.get()); - } - }); - - root.addCounter("nmiss", new Instrument<Long>() { - public void sample() { - setValue(nmiss.get()); - } - }); - - root.addCounter("hitRate", new Instrument<Double>() { - public void sample() { - final long nhit = WriteCacheCounters.this.nhit.get(); - final long ntests = nhit + WriteCacheCounters.this.nmiss.get(); - setValue(ntests == 0L ? 0d : (double) nhit / ntests); - } - }); - - /* - * write on the cache. - */ - - // #of records accepted by the write cache. - root.addCounter("naccept", new Instrument<Long>() { - public void sample() { - setValue(naccept); - } - }); - - // #of bytes in records accepted by the write cache. 
- root.addCounter("bytesAccepted", new Instrument<Long>() { - public void sample() { - setValue(bytesAccepted); - } - }); - - /* - * write on the channel. - */ - - // #of times the write cache was flushed to the backing channel. - root.addCounter("nflush", new Instrument<Long>() { - public void sample() { - setValue(nflush); - } - }); - - // #of writes onto the backing channel. - root.addCounter("nwrite", new Instrument<Long>() { - public void sample() { - setValue(nwrite); - } - }); - - // #of bytes written onto the backing channel. - root.addCounter("bytesWritten", new Instrument<Long>() { - public void sample() { - setValue(bytesWritten); - } - }); - - // average bytes per write (will under report if we must retry - // writes). - root.addCounter("bytesPerWrite", new Instrument<Double>() { - public void sample() { - final double bytesPerWrite = (nwrite == 0 ? 0d : (bytesWritten / (double) nwrite)); - setValue(bytesPerWrite); - } - }); - - // elapsed time writing on the backing channel. - root.addCounter("writeSecs", new Instrument<Double>() { - public void sample() { - setValue(elapsedWriteNanos / 1000000000.); - } - }); - - return root; - - } // getCounters() - - public String toString() { - - return getCounters().toString(); - - } - - } // class WriteCacheCounters - - /** - * A {@link WriteCache} implementation suitable for an append-only file such - * as the {@link WORMStrategy} or the output file of the - * {@link IndexSegmentBuilder}. - * - * @author <a href="mailto:tho...@us...">Bryan - * Thompson</a> - */ - public static class FileChannelWriteCache extends WriteCache { - - /** - * An offset which will be applied to each record written onto the - * backing {@link FileChannel}. The offset is generally the size of the - * root blocks for a journal or the checkpoint record for an index - * segment. It can be zero if you do not have anything at the head of - * the file. - * <p> - * Note: This implies that writing the root blocks is done separately in - * the protocol since you can't write below this offset otherwise. - */ - final protected long baseOffset; - - /** - * Used to re-open the {@link FileChannel} in this class. - */ - public final IReopenChannel<FileChannel> opener; - - /** - * @param baseOffset - * An offset - * @param buf - * @param opener - * - * @throws InterruptedException - */ - public FileChannelWriteCache(final long baseOffset, final IBufferAccess buf, final boolean useChecksum, - final boolean isHighlyAvailable, final boolean bufferHasData, final IReopenChannel<FileChannel> opener) - throws InterruptedException { - - super(buf, false/* scatteredWrites */, useChecksum, isHighlyAvailable, bufferHasData); - - if (baseOffset < 0) - throw new IllegalArgumentException(); - - if (opener == null) - throw new IllegalArgumentException(); - - this.baseOffset = baseOffset; - - this.opener = opener; - - } - - @Override - protected boolean writeOnChannel(final ByteBuffer data, final long firstOffset, - final Map<Long, RecordMetadata> recordMap, final long nanos) throws InterruptedException, IOException { - - final long begin = System.nanoTime(); - - final int nbytes = data.remaining(); - - /* - * The position in the file at which the record will be written. - */ - final long pos = baseOffset + firstOffset; - - /* - * Write bytes in [data] from position to limit onto the channel. - * - * @todo This ignores the timeout. 
- */ - final int nwrites = FileChannelUtility.writeAll(opener, data, pos); - - final WriteCacheCounters counters = this.counters.get(); - counters.nwrite += nwrites; - counters.bytesWritten += nbytes; - counters.elapsedWriteNanos += (System.nanoTime() - begin); - - return true; - - } - - } - - /** - * The scattered write cache is used by the {@link RWStore} since the writes - * can be made to any part of the file assigned for data allocation. - * <p> - * The writeonChannel must therefore utilize the {@link RecordMetadata} to - * write each update separately. - * <p> - * To support HA, we prefix each write with the file position and buffer - * length in the cache. This enables the cache buffer to be sent as a single - * stream and the RecordMap rebuilt downstream. - * - * FIXME Once the file system cache fills up the throughput is much lower - * for the RW mode. Look into putting a thread pool to work on the scattered - * writes. This could be part of a refactor to apply a thread pool to IOs - * and related to prefetch and {@link Memoizer} behaviors. - * - * FIXME To maximize IO rates we should attempt to elide/merge contiguous - * writes. To do this can double-buffer in writeOnChannel. This also - * provides an opportunity to write the full slot size of the RWStore that - * may have advantages, particularly for an SSD, since it may avoid a - * pre-write read to populate the write sector. - */ - public static class FileChannelScatteredWriteCache extends WriteCache { - - /** - * Used to re-open the {@link FileChannel} in this class. - */ - private final IReopenChannel<FileChannel> opener; - - private final BufferedWrite m_bufferedWrite; - /** - * @param baseOffset - * An offset - * @param buf - * @param opener - * - * @throws InterruptedException - */ - public FileChannelScatteredWriteCache(final IBufferAccess buf, final boolean useChecksum, - final boolean isHighlyAvailable, final boolean bufferHasData, final IReopenChannel<FileChannel> opener, - final BufferedWrite bufferedWrite) - throws InterruptedException { - - super(buf, true/* scatteredWrites */, useChecksum, isHighlyAvailable, bufferHasData); - - if (opener == null) - throw new IllegalArgumentException(); - - this.opener = opener; - - m_bufferedWrite = bufferedWrite; - - } - - /** - * Called by WriteCacheService to process a direct write for large - * blocks and also to flush data from dirty caches. - */ - protected boolean writeOnChannel(final ByteBuffer data, - final long firstOffsetIgnored, - final Map<Long, RecordMetadata> recordMap, final long nanos) - throws InterruptedException, IOException { - - final long begin = System.nanoTime(); - - final int nbytes = data.remaining(); - - if (m_written) { - log.warn("DUPLICATE writeOnChannel for : " + this.hashCode()); - } else { - assert !this.isEmpty(); - - m_written = true; - } - - /* - * Retrieve the sorted write iterator and write each block to the - * file. - * - * If there is a BufferedWrite then ensure it is reset. - */ - if (m_bufferedWrite != null) { - m_bufferedWrite.reset(); - } - - int nwrites = 0; - final Iterator<Entry<Long, RecordMetadata>> entries = recordMap.entrySet().iterator(); - while (entries.hasNext()) { - - final Entry<Long, RecordMetadata> entry = entries.next(); - - final RecordMetadata md = entry.getValue(); - - // create a view on record of interest. - final ByteBuffer view = data.duplica... [truncated message content] |
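The counters refactor above is hard to read inside a raw diff, so the following is a minimal, self-contained sketch of the same pattern: several write caches publish into one shared counters object so that hit/miss statistics aggregate across instances, and the hit rate is derived exactly as in the hitRate Instrument. This is not the bigdata code: AtomicLong stands in for the project's CAT class, and SharedCacheCounters / CacheStub are illustrative names.

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReference;

class SharedCacheCounters {

    final AtomicLong nhit = new AtomicLong();   // reads satisfied by the cache
    final AtomicLong nmiss = new AtomicLong();  // reads that missed the cache

    double hitRate() {
        final long hits = nhit.get();
        final long tests = hits + nmiss.get();
        return tests == 0L ? 0d : (double) hits / tests; // same formula as the hitRate counter
    }
}

class CacheStub {

    // Each cache holds a swappable reference so a service can install one
    // shared counters object across many cache instances.
    private final AtomicReference<SharedCacheCounters> counters =
            new AtomicReference<SharedCacheCounters>(new SharedCacheCounters());

    void setCounters(final SharedCacheCounters newVal) {
        if (newVal == null)
            return; // as in the diff, a null argument is ignored rather than rejected
        counters.set(newVal);
    }

    void recordRead(final boolean hit) {
        if (hit)
            counters.get().nhit.incrementAndGet();
        else
            counters.get().nmiss.incrementAndGet();
    }

    public static void main(final String[] args) {
        final SharedCacheCounters shared = new SharedCacheCounters();
        final CacheStub a = new CacheStub();
        final CacheStub b = new CacheStub();
        a.setCounters(shared); // both caches now report into the same counters
        b.setCounters(shared);
        a.recordRead(true);
        b.recordRead(false);
        System.out.println("hitRate=" + shared.hitRate()); // 0.5
    }
}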
From: <tho...@us...> - 2012-01-20 17:29:25
|
Revision: 5840 http://bigdata.svn.sourceforge.net/bigdata/?rev=5840&view=rev Author: thompsonbry Date: 2012-01-20 17:29:19 +0000 (Fri, 20 Jan 2012) Log Message: ----------- Bug fix. When disabling the recycling of the old bloom filter I accidentally failed to actually turn off the bloom filter. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2012-01-20 16:19:51 UTC (rev 5839) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2012-01-20 17:29:19 UTC (rev 5840) @@ -2036,6 +2036,7 @@ * * @see https://sourceforge.net/apps/trac/bigdata/ticket/440 */ + filter.disable(); // final long curAddr = filter.disable(); // if (curAddr != IRawStore.NULL) // store.delete(curAddr); Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2012-01-20 16:19:51 UTC (rev 5839) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2012-01-20 17:29:19 UTC (rev 5840) @@ -2036,6 +2036,7 @@ * * @see https://sourceforge.net/apps/trac/bigdata/ticket/440 */ + filter.disable(); // recycle(filter.disable()); log.warn("Bloom filter disabled - maximum error rate would be exceeded" This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
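For readers skimming the diff: the regression was that, when recycling of the old bloom filter record was commented out for ticket #440, the filter.disable() call vanished with it, so an over-full bloom filter would keep being consulted past the point where its error rate is acceptable. Below is a compact sketch of the control flow this commit restores; the names (BloomFilterSketch, maybeDisable, maxEntries) are illustrative rather than the bigdata API, and NULL stands in for IRawStore.NULL.

class BloomFilterSketch {

    static final long NULL = 0L; // stand-in for IRawStore.NULL

    private boolean enabled = true;
    private long addr = NULL;    // address of the persisted filter record, if any

    /** Turn the filter off and hand back the address of its old record. */
    long disable() {
        enabled = false;
        final long oldAddr = addr;
        addr = NULL;
        return oldAddr;
    }

    boolean isEnabled() {
        return enabled;
    }

    /**
     * Called from the insert path once entryCount exceeds the maximum #of
     * index entries for which the filter still has an acceptable error rate.
     */
    void maybeDisable(final long entryCount, final long maxEntries) {

        if (!isEnabled() || entryCount <= maxEntries)
            return;

        // r5840: the disable() call must happen even while recycling is suspended.
        final long oldAddr = disable();

        // Recycling of oldAddr remains commented out pending ticket #440, e.g.:
        // if (oldAddr != NULL) store.delete(oldAddr);
    }
}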
From: <tho...@us...> - 2012-01-23 12:56:52
|
Revision: 5841 http://bigdata.svn.sourceforge.net/bigdata/?rev=5841&view=rev Author: thompsonbry Date: 2012-01-23 12:56:45 +0000 (Mon, 23 Jan 2012) Log Message: ----------- Enabling recycling of the bloom filters. Javadoc on the ganglia classes. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTree.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/BTree.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/counters/ganglia/BigdataGangliaService.java branches/BIGDATA_RELEASE_1_1_0/bigdata-ganglia/src/java/com/bigdata/ganglia/GangliaService.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2012-01-20 17:29:19 UTC (rev 5840) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2012-01-23 12:56:45 UTC (rev 5841) @@ -2026,20 +2026,20 @@ * maximum #of index entries for which the bloom filter will * have an acceptable error rate. */ - /* - * TODO The code to recycle the old checkpoint addr, the old - * root addr, and the old bloom filter has been disabled in - * writeCheckpoint2 and AbstractBTree#insert pending the - * resolution of ticket #440. This is being done to minimize - * the likelyhood that the underlying bug for that ticket - * can be tripped by the code. - * - * @see https://sourceforge.net/apps/trac/bigdata/ticket/440 - */ - filter.disable(); -// final long curAddr = filter.disable(); -// if (curAddr != IRawStore.NULL) -// store.delete(curAddr); +// /* +// * TODO The code to recycle the old checkpoint addr, the old +// * root addr, and the old bloom filter has been disabled in +// * writeCheckpoint2 and AbstractBTree#insert pending the +// * resolution of ticket #440. This is being done to minimize +// * the likelyhood that the underlying bug for that ticket +// * can be tripped by the code. +// * +// * @see https://sourceforge.net/apps/trac/bigdata/ticket/440 +// */ +// filter.disable(); + final long curAddr = filter.disable(); + if (curAddr != IRawStore.NULL) + store.delete(curAddr); log.warn("Bloom filter disabled - maximum error rate would be exceeded" + ": entryCount=" Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTree.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTree.java 2012-01-20 17:29:19 UTC (rev 5840) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTree.java 2012-01-23 12:56:45 UTC (rev 5841) @@ -933,25 +933,25 @@ * The bloom filter is enabled, is loaded and is dirty, so write * it on the store now. */ - /* - * TODO The code to recycle the old checkpoint addr, the old - * root addr, and the old bloom filter has been disabled in - * writeCheckpoint2 and AbstractBTree#insert pending the - * resolution of ticket #440. This is being done to minimize - * the likelyhood that the underlying bug for that ticket - * can be tripped by the code. 
- * - * @see https://sourceforge.net/apps/trac/bigdata/ticket/440 - */ -// final long oldAddr = filter.getAddr(); -// if (oldAddr != IRawStore.NULL) { -// this.getBtreeCounters().bytesReleased += store.getByteCount(oldAddr); -// -// store.delete(oldAddr); -// } - - filter.write(store); +// /* +// * TODO The code to recycle the old checkpoint addr, the old +// * root addr, and the old bloom filter has been disabled in +// * writeCheckpoint2 and AbstractBTree#insert pending the +// * resolution of ticket #440. This is being done to minimize +// * the likelyhood that the underlying bug for that ticket +// * can be tripped by the code. +// * +// * @see https://sourceforge.net/apps/trac/bigdata/ticket/440 +// */ + final long oldAddr = filter.getAddr(); + if (oldAddr != IRawStore.NULL) { + this.getBtreeCounters().bytesReleased += store.getByteCount(oldAddr); + store.delete(oldAddr); + } +// +// filter.write(store); + } } Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2012-01-20 17:29:19 UTC (rev 5840) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2012-01-23 12:56:45 UTC (rev 5841) @@ -2026,18 +2026,18 @@ * have an acceptable error rate. */ - /* - * TODO The code to recycle the old checkpoint addr, the old - * root addr, and the old bloom filter has been disabled in - * writeCheckpoint2 and AbstractBTree#insert pending the - * resolution of ticket #440. This is being done to minimize - * the likelyhood that the underlying bug for that ticket - * can be tripped by the code. - * - * @see https://sourceforge.net/apps/trac/bigdata/ticket/440 - */ - filter.disable(); -// recycle(filter.disable()); +// /* +// * TODO The code to recycle the old checkpoint addr, the old +// * root addr, and the old bloom filter has been disabled in +// * writeCheckpoint2 and AbstractBTree#insert pending the +// * resolution of ticket #440. This is being done to minimize +// * the likelyhood that the underlying bug for that ticket +// * can be tripped by the code. +// * +// * @see https://sourceforge.net/apps/trac/bigdata/ticket/440 +// */ +// filter.disable(); + recycle(filter.disable()); log.warn("Bloom filter disabled - maximum error rate would be exceeded" + ": entryCount=" Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/BTree.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/BTree.java 2012-01-20 17:29:19 UTC (rev 5840) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/BTree.java 2012-01-23 12:56:45 UTC (rev 5841) @@ -894,19 +894,19 @@ * it on the store now. */ - /* - * TODO The code to recycle the old checkpoint addr, the old - * root addr, and the old bloom filter has been disabled in - * writeCheckpoint2 and AbstractBTree#insert pending the - * resolution of ticket #440. This is being done to minimize - * the likelyhood that the underlying bug for that ticket - * can be tripped by the code. 
- * - * @see https://sourceforge.net/apps/trac/bigdata/ticket/440 - */ -// recycle(filter.getAddr()); - - filter.write(store); +// /* +// * TODO The code to recycle the old checkpoint addr, the old +// * root addr, and the old bloom filter has been disabled in +// * writeCheckpoint2 and AbstractBTree#insert pending the +// * resolution of ticket #440. This is being done to minimize +// * the likelyhood that the underlying bug for that ticket +// * can be tripped by the code. +// * +// * @see https://sourceforge.net/apps/trac/bigdata/ticket/440 +// */ + recycle(filter.getAddr()); +// +// filter.write(store); } Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/counters/ganglia/BigdataGangliaService.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/counters/ganglia/BigdataGangliaService.java 2012-01-20 17:29:19 UTC (rev 5840) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/counters/ganglia/BigdataGangliaService.java 2012-01-23 12:56:45 UTC (rev 5841) @@ -28,10 +28,16 @@ * TODO Research how to make ganglia recognize a value which is not * being reported as "not available" rather than just painting the last * reported value. Tmax? DMax? - * - * TODO Can metrics be declared which automatically collect history - * from the sampled counters? It would be nice to abstract that stuff - * out of bigdata. + * + * TODO Can metrics be declared which automatically collect history from + * the sampled counters? It would be nice to abstract that stuff out of + * bigdata. + * + * TODO We should be reporting out the CPU context switches and + * interrupts per second data from vmstat. Ganglia does not collect this + * stuff and it provides interesting insight into the CPU workload and + * instruction stalls, especially when correlated with the application + * workload (load, vs closure, vs query). */ public class BigdataGangliaService extends GangliaService { Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata-ganglia/src/java/com/bigdata/ganglia/GangliaService.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata-ganglia/src/java/com/bigdata/ganglia/GangliaService.java 2012-01-20 17:29:19 UTC (rev 5840) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata-ganglia/src/java/com/bigdata/ganglia/GangliaService.java 2012-01-23 12:56:45 UTC (rev 5841) @@ -52,6 +52,29 @@ * and can register an {@link IGangliaMetricsCollector} to periodically report * performance metrics out to the ganglia network. You can also use * {@link #setMetric(String, Object)} to directly update a metric. + * <p> + * The ganglia protocol replicates soft state into all <code>gmond</code> and + * <code>gmetad</code> instances. The {@link GangliaService} is a full + * participant in the protocol and will also develop a snapshot of the soft + * state of the cluster. This protocol has a lot of benefits, but the metadata + * declarations can stick around and be discovered long after you have shutdown + * the {@link GangliaService}. + * <p> + * When developing with the embedded {@link GangliaService} it is a Good Idea to + * use a private <code>gmond</code> / <code>gmetad</code> configuration + * (non-default address and/or port). Then, if you wind up making some mistakes + * when declaring your application counters and those mistakes replicated into + * the soft state of the ganglia services you can just shutdown your private + * service instances. 
+ * <p> + * Another trick is to use the [mock] mode (it is a constructor argument) while + * you are developing so the {@link GangliaService} does not actually send out + * any packets. + * <p> + * Finally, there is also a utility class (in the test suite) which can be used + * to monitor the ganglia network for packets which can not be round-tripped. + * This can be used to identify bugs if you are doing development on embedded + * {@link GangliaService} code base. * * @see <a href="http://ganglia.sourceforge.net/"> The ganglia monitoring * system</a> @@ -85,9 +108,9 @@ * sample data) can be obtained using * <code>telenet localhost 8649</code>. This is the hook used for the * ganglia web UI, some nagios integrations, etc. - * + * * FIXME Provide a sample collector for the JVM and use it in main(). - * + * * TODO Port the more sophisticated collectors into this module? */ public class GangliaService implements Runnable, IGangliaMetricsReporter { This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
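Two details are worth pulling out of this commit. First, the recycle pattern itself: delete the old persistent record, credit the released bytes to the B+Tree counters, then write the dirty record again. Second, in both BTree.java hunks above the re-enabled recycle block still leaves filter.write(store) commented out, which is the regression that revision 5860 below repairs. A hedged sketch of the intended pattern follows; StoreStub, CheckpointSketch and the NULL constant are illustrative stand-ins for IRawStore, the checkpoint code and IRawStore.NULL, not the bigdata classes.

interface StoreStub {

    int getByteCount(long addr);   // #of bytes at a given address
    void delete(long addr);        // release the allocation
    long write(byte[] record);     // persist a record, returning its new address
}

class CheckpointSketch {

    static final long NULL = 0L;   // stand-in for IRawStore.NULL

    long bytesReleased;            // mirrors BTreeCounters#bytesReleased

    /** Delete an old allocation and account for the released bytes. */
    int recycle(final StoreStub store, final long addr) {

        if (addr == NULL)
            return 0;

        final int nbytes = store.getByteCount(addr);
        bytesReleased += nbytes;
        store.delete(addr);

        return nbytes;
    }

    /** Recycle the old bloom filter record, then persist the dirty filter. */
    long checkpointFilter(final StoreStub store, final long oldAddr, final byte[] dirtyFilter) {

        recycle(store, oldAddr);          // re-enabled by this commit (r5841)
        return store.write(dirtyFilter);  // must still run; restored by r5860
    }
}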
From: <tho...@us...> - 2012-01-23 13:20:07
|
Revision: 5842 http://bigdata.svn.sourceforge.net/bigdata/?rev=5842&view=rev Author: thompsonbry Date: 2012-01-23 13:19:56 +0000 (Mon, 23 Jan 2012) Log Message: ----------- Modified the [debug] and [threadLocalBuffers] fields to be initialized from environment variables. The defaults are [false] for both, but now they can be easily changed. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/BigdataStatics.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/BigdataStatics.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/BigdataStatics.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/BigdataStatics.java 2012-01-23 12:56:45 UTC (rev 5841) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/BigdataStatics.java 2012-01-23 13:19:56 UTC (rev 5842) @@ -43,7 +43,7 @@ * of the flag makes it easier to figure out where those {@link System#err} * messages are coming from. This should always be off in the trunk. */ - public static final boolean debug = false; + public static final boolean debug = Boolean.getBoolean("com.bigdata.debug"); /** * The #of lines of output from a child process which will be echoed onto @@ -66,6 +66,7 @@ * cache combined with unbounded thread pools causes effective memory * leak) */ - public static final boolean threadLocalBuffers = false; + public static final boolean threadLocalBuffers = Boolean + .getBoolean("com.bigdata.threadLocalBuffers"); } Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/BigdataStatics.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/BigdataStatics.java 2012-01-23 12:56:45 UTC (rev 5841) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/BigdataStatics.java 2012-01-23 13:19:56 UTC (rev 5842) @@ -43,7 +43,7 @@ * of the flag makes it easier to figure out where those {@link System#err} * messages are coming from. This should always be off in the trunk. */ - public static final boolean debug = false; + public static final boolean debug = Boolean.getBoolean("com.bigdata.debug"); /** * The #of lines of output from a child process which will be echoed onto @@ -66,6 +66,7 @@ * cache combined with unbounded thread pools causes effective memory * leak) */ - public static final boolean threadLocalBuffers = false; + public static final boolean threadLocalBuffers = Boolean + .getBoolean("com.bigdata.threadLocalBuffers"); } This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
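One clarification on the log message above: Boolean.getBoolean(String) consults a JVM system property, not an OS environment variable, and the static final fields are evaluated when BigdataStatics is first loaded, so the flags must be supplied before that happens, typically as -D options on the java command line. A small demo using nothing beyond the JDK; FlagDemo is an illustrative class name.

public class FlagDemo {

    public static void main(final String[] args) {

        // Normally set on the command line before the JVM starts, e.g.:
        //
        //   java -Dcom.bigdata.debug=true -Dcom.bigdata.threadLocalBuffers=true ...
        //
        // Here one property is set programmatically just to show the semantics.
        System.setProperty("com.bigdata.debug", "true");

        // true only if the named *system property* exists and equals "true"
        // (case-insensitive); the OS environment is never consulted.
        final boolean debug = Boolean.getBoolean("com.bigdata.debug");

        // Unset property => false, matching the old hard-coded defaults.
        final boolean threadLocalBuffers = Boolean.getBoolean("com.bigdata.threadLocalBuffers");

        System.out.println("debug=" + debug + ", threadLocalBuffers=" + threadLocalBuffers);
    }
}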
From: <tho...@us...> - 2012-01-23 18:08:24
|
Revision: 5847 http://bigdata.svn.sourceforge.net/bigdata/?rev=5847&view=rev Author: thompsonbry Date: 2012-01-23 18:08:17 +0000 (Mon, 23 Jan 2012) Log Message: ----------- Adding release notes for 1.0.4 and 1.1.1 releases. Added Paths: ----------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_4.txt branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_4.txt branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_1_1.txt Added: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_4.txt =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_4.txt (rev 0) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_4.txt 2012-01-23 18:08:17 UTC (rev 5847) @@ -0,0 +1,114 @@ +This is a 1.0.x maintenance release of bigdata(R). New users are encouraged to go directly to the 1.1.0 release. Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation. + +Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput. + +See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7]. + +Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script. + +You can download the WAR from: + +http://sourceforge.net/projects/bigdata/ + +You can checkout this release from: + +https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_3 + +Feature summary: + +- Single machine data storage to ~50B triples/quads (RWStore); +- Clustered data storage is essentially unlimited; +- Simple embedded and/or webapp deployment (NanoSparqlServer); +- Triples, quads, or triples with provenance (SIDs); +- 100% native SPARQL 1.0 evaluation with lots of query optimizations; +- Fast RDFS+ inference and truth maintenance; +- Fast statement level provenance mode (SIDs). + +Road map [3]: + +- High-volume analytic query and SPARQL 1.1 query, including aggregations; +- SPARQL 1.1 Update, Property Paths, and Federation support; +- Simplified deployment, configuration, and administration for clusters; and +- High availability for the journal and the cluster. + +Change log: + + Note: Versions with (*) require data migration. For details, see [9]. 
+ +1.0.4 + +- http://sourceforge.net/apps/trac/bigdata/ticket/443 (Logger for RWStore transaction service and recycler) +- http://sourceforge.net/apps/trac/bigdata/ticket/445 (RWStore does not track tx release correctly) +- http://sourceforge.net/apps/trac/bigdata/ticket/437 (Thread-local cache combined with unbounded thread pools causes effective memory leak: termCache memory leak & thread-local buffers) + +1.0.3 + + - http://sourceforge.net/apps/trac/bigdata/ticket/217 (BTreeCounters does not track bytes released) + - http://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface) + - http://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled.) + - http://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune the CommitRecordIndex) + - http://sourceforge.net/apps/trac/bigdata/ticket/375 (Persistent memory leaks (RWStore/DISK)) + - http://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException) + - http://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal) + - http://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer) + - http://sourceforge.net/apps/trac/bigdata/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API) + - http://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment) + - http://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API) + - http://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail) + - http://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer) + - http://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out) + - http://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out) + - http://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit) + - http://sourceforge.net/apps/trac/bigdata/ticket/435 (Address is 0L) + - http://sourceforge.net/apps/trac/bigdata/ticket/436 (TestMROWTransactions failure in CI) + +1.0.2 + + - http://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.) + - http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) + - http://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.) + - http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.) + - http://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.) + - http://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.) + - http://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.) + - http://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.) 
+ - http://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0) + +1.0.1 (*) + + - http://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store). + - http://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out). + - http://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes). + - http://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used). + - http://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance). + - http://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)). + - http://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database). + - http://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries). + - http://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value). + - http://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".) + - http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.) + +For more information about bigdata, please see the following links: + +[1] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page +[2] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted +[3] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap +[4] http://www.bigdata.com/bigdata/docs/api/ +[5] http://sourceforge.net/projects/bigdata/ +[6] http://www.bigdata.com/blog +[7] http://www.systap.com/bigdata.htm +[8] http://sourceforge.net/projects/bigdata/files/bigdata/ +[9] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration + +About bigdata: + +Bigdata\xAE is a horizontally-scaled, general purpose storage and computing fabric +for ordered data (B+Trees), designed to operate on either a single server or a +cluster of commodity hardware. Bigdata\xAE uses dynamically partitioned key-range +shards in order to remove any realistic scaling limits - in principle, bigdata\xAE +may be deployed on 10s, 100s, or even thousands of machines and new capacity may +be added incrementally without requiring the full reload of all data. The bigdata\xAE +RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), +and datum level provenance. Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_4.txt ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Added: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_4.txt =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_4.txt (rev 0) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_4.txt 2012-01-23 18:08:17 UTC (rev 5847) @@ -0,0 +1,114 @@ +This is a 1.0.x maintenance release of bigdata(R). New users are encouraged to go directly to the 1.1.0 release. 
Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation. + +Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput. + +See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7]. + +Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script. + +You can download the WAR from: + +http://sourceforge.net/projects/bigdata/ + +You can checkout this release from: + +https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_3 + +Feature summary: + +- Single machine data storage to ~50B triples/quads (RWStore); +- Clustered data storage is essentially unlimited; +- Simple embedded and/or webapp deployment (NanoSparqlServer); +- Triples, quads, or triples with provenance (SIDs); +- 100% native SPARQL 1.0 evaluation with lots of query optimizations; +- Fast RDFS+ inference and truth maintenance; +- Fast statement level provenance mode (SIDs). + +Road map [3]: + +- High-volume analytic query and SPARQL 1.1 query, including aggregations; +- SPARQL 1.1 Update, Property Paths, and Federation support; +- Simplified deployment, configuration, and administration for clusters; and +- High availability for the journal and the cluster. + +Change log: + + Note: Versions with (*) require data migration. For details, see [9]. + +1.0.4 + +- http://sourceforge.net/apps/trac/bigdata/ticket/443 (Logger for RWStore transaction service and recycler) +- http://sourceforge.net/apps/trac/bigdata/ticket/445 (RWStore does not track tx release correctly) +- http://sourceforge.net/apps/trac/bigdata/ticket/437 (Thread-local cache combined with unbounded thread pools causes effective memory leak: termCache memory leak & thread-local buffers) + +1.0.3 + + - http://sourceforge.net/apps/trac/bigdata/ticket/217 (BTreeCounters does not track bytes released) + - http://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface) + - http://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled.) 
+ - http://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune the CommitRecordIndex) + - http://sourceforge.net/apps/trac/bigdata/ticket/375 (Persistent memory leaks (RWStore/DISK)) + - http://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException) + - http://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal) + - http://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer) + - http://sourceforge.net/apps/trac/bigdata/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API) + - http://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment) + - http://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API) + - http://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail) + - http://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer) + - http://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out) + - http://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out) + - http://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit) + - http://sourceforge.net/apps/trac/bigdata/ticket/435 (Address is 0L) + - http://sourceforge.net/apps/trac/bigdata/ticket/436 (TestMROWTransactions failure in CI) + +1.0.2 + + - http://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.) + - http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) + - http://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.) + - http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.) + - http://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.) + - http://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.) + - http://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.) + - http://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.) + - http://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0) + +1.0.1 (*) + + - http://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store). + - http://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out). + - http://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes). + - http://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used). + - http://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance). 
+ - http://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)). + - http://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database). + - http://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries). + - http://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value). + - http://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".) + - http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.) + +For more information about bigdata, please see the following links: + +[1] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page +[2] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted +[3] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap +[4] http://www.bigdata.com/bigdata/docs/api/ +[5] http://sourceforge.net/projects/bigdata/ +[6] http://www.bigdata.com/blog +[7] http://www.systap.com/bigdata.htm +[8] http://sourceforge.net/projects/bigdata/files/bigdata/ +[9] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration + +About bigdata: + +Bigdata\xAE is a horizontally-scaled, general purpose storage and computing fabric +for ordered data (B+Trees), designed to operate on either a single server or a +cluster of commodity hardware. Bigdata\xAE uses dynamically partitioned key-range +shards in order to remove any realistic scaling limits - in principle, bigdata\xAE +may be deployed on 10s, 100s, or even thousands of machines and new capacity may +be added incrementally without requiring the full reload of all data. The bigdata\xAE +RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), +and datum level provenance. Property changes on: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_4.txt ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Added: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_1_1.txt =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_1_1.txt (rev 0) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_1_1.txt 2012-01-23 18:08:17 UTC (rev 5847) @@ -0,0 +1,143 @@ +This is a major version release of bigdata(R). Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation. + +Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. 
Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput. + +See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7]. + +Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script. + +You can download the WAR from: + +http://sourceforge.net/projects/bigdata/ + +You can checkout this release from: + +https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_1_0 + +New features: + +- Fast, scalable native support for SPARQL 1.1 analytic queries; +- %100 Java memory manager leverages the JVM native heap (no GC); +- New extensible hash tree index structure; + +Feature summary: + +- Single machine data storage to ~50B triples/quads (RWStore); +- Clustered data storage is essentially unlimited; +- Simple embedded and/or webapp deployment (NanoSparqlServer); +- Triples, quads, or triples with provenance (SIDs); +- Fast 100% native SPARQL 1.0 evaluation; +- Integrated "analytic" query package; +- Fast RDFS+ inference and truth maintenance; +- Fast statement level provenance mode (SIDs). + +Road map [3]: + +- Simplified deployment, configuration, and administration for clusters; and +- High availability for the journal and the cluster. + +Change log: + + Note: Versions with (*) require data migration. For details, see [9]. + +1.1.1 + +- http://sourceforge.net/apps/trac/bigdata/ticket/433 (Cluster leaks threads under read-only index operations: DGC thread leak) +- http://sourceforge.net/apps/trac/bigdata/ticket/437 (Thread-local cache combined with unbounded thread pools causes effective memory leak: termCache memory leak & thread-local buffers) +- http://sourceforge.net/apps/trac/bigdata/ticket/443 (Logger for RWStore transaction service and recycler) +- http://sourceforge.net/apps/trac/bigdata/ticket/445 (RWStore does not track tx release correctly) +- http://sourceforge.net/apps/trac/bigdata/ticket/446 (HTTP Repostory broken with bigdata 1.1.0) + +1.1.0 (*) + + - http://sourceforge.net/apps/trac/bigdata/ticket/23 (Lexicon joins) + - http://sourceforge.net/apps/trac/bigdata/ticket/109 (Store large literals as "blobs") + - http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) + - http://sourceforge.net/apps/trac/bigdata/ticket/203 (Implement an persistence capable hash table to support analytic query) + - http://sourceforge.net/apps/trac/bigdata/ticket/209 (AccessPath should visit binding sets rather than elements for high level query.) + - http://sourceforge.net/apps/trac/bigdata/ticket/227 (SliceOp appears to be necessary when operator plan should suffice without) + - http://sourceforge.net/apps/trac/bigdata/ticket/232 (Bottom-up evaluation semantics). + - http://sourceforge.net/apps/trac/bigdata/ticket/246 (Derived xsd numeric data types must be inlined as extension types.) 
+ - http://sourceforge.net/apps/trac/bigdata/ticket/254 (Revisit pruning of intermediate variable bindings during query execution) + - http://sourceforge.net/apps/trac/bigdata/ticket/261 (Lift conditions out of subqueries.) + - http://sourceforge.net/apps/trac/bigdata/ticket/300 (Native ORDER BY) + - http://sourceforge.net/apps/trac/bigdata/ticket/324 (Inline predeclared URIs and namespaces in 2-3 bytes) + - http://sourceforge.net/apps/trac/bigdata/ticket/330 (NanoSparqlServer does not locate "html" resources when run from jar) + - http://sourceforge.net/apps/trac/bigdata/ticket/334 (Support inlining of unicode data in the statement indices.) + - http://sourceforge.net/apps/trac/bigdata/ticket/364 (Scalable default graph evaluation) + - http://sourceforge.net/apps/trac/bigdata/ticket/368 (Prune variable bindings during query evaluation) + - http://sourceforge.net/apps/trac/bigdata/ticket/370 (Direct translation of openrdf AST to bigdata AST) + - http://sourceforge.net/apps/trac/bigdata/ticket/373 (Fix StrBOp and other IValueExpressions) + - http://sourceforge.net/apps/trac/bigdata/ticket/377 (Optimize OPTIONALs with multiple statement patterns.) + - http://sourceforge.net/apps/trac/bigdata/ticket/380 (Native SPARQL evaluation on cluster) + - http://sourceforge.net/apps/trac/bigdata/ticket/387 (Cluster does not compute closure) + - http://sourceforge.net/apps/trac/bigdata/ticket/395 (HTree hash join performance) + - http://sourceforge.net/apps/trac/bigdata/ticket/401 (inline xsd:unsigned datatypes) + - http://sourceforge.net/apps/trac/bigdata/ticket/408 (xsd:string cast fails for non-numeric data) + - http://sourceforge.net/apps/trac/bigdata/ticket/421 (New query hints model.) + - http://sourceforge.net/apps/trac/bigdata/ticket/431 (Use of read-only tx per query defeats cache on cluster) + +1.0.3 + + - http://sourceforge.net/apps/trac/bigdata/ticket/217 (BTreeCounters does not track bytes released) + - http://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface) + - http://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled.) 
+ - http://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune the CommitRecordIndex) + - http://sourceforge.net/apps/trac/bigdata/ticket/375 (Persistent memory leaks (RWStore/DISK)) + - http://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException) + - http://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal) + - http://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer) + - http://sourceforge.net/apps/trac/bigdata/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API) + - http://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment) + - http://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API) + - http://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail) + - http://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer) + - http://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out) + - http://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out) + - http://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit) + - http://sourceforge.net/apps/trac/bigdata/ticket/435 (Address is 0L) + - http://sourceforge.net/apps/trac/bigdata/ticket/436 (TestMROWTransactions failure in CI) + +1.0.2 + + - http://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.) + - http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) + - http://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.) + - http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.) + - http://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.) + - http://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.) + - http://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.) + - http://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.) + - http://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0) + +1.0.1 (*) + + - http://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store). + - http://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out). + - http://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes). + - http://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used). + - http://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance). 
+ - http://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)). + - http://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database). + - http://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries). + - http://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value). + - http://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".) + - http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.) + +For more information about bigdata(R), please see the following links: + +[1] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page +[2] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted +[3] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap +[4] http://www.bigdata.com/bigdata/docs/api/ +[5] http://sourceforge.net/projects/bigdata/ +[6] http://www.bigdata.com/blog +[7] http://www.systap.com/bigdata.htm +[8] http://sourceforge.net/projects/bigdata/files/bigdata/ +[9] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration + +About bigdata: + +Bigdata\xAE is a horizontally-scaled, general purpose storage and computing fabric for ordered data (B+Trees), designed to operate on either a single server or a cluster of commodity hardware. Bigdata\xAE uses dynamically partitioned key-range shards in order to remove any realistic scaling limits - in principle, bigdata\xAE may be deployed on 10s, 100s, or even thousands of machines and new capacity may be added incrementally without requiring the full reload of all data. The bigdata\xAE RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), and datum level provenance. Property changes on: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_1_1.txt ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2012-01-24 13:39:52
|
Revision: 5860 http://bigdata.svn.sourceforge.net/bigdata/?rev=5860&view=rev Author: thompsonbry Date: 2012-01-24 13:39:42 +0000 (Tue, 24 Jan 2012) Log Message: ----------- Bug fix to actually write out the bloom filter in BTree#writeCheckpoint2(). This bug was introduced when I reapplied the change to recycle the bloom filter. This change applies to 1.1.1 and 1.0.4. Removed use of System.err in TestBTreeWithBloomFilter, at least in the 1.1.1 branch. Javadoc on AbstractBTree#recycle() in the 1.1.1 branch. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTree.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/BTree.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/test/com/bigdata/btree/TestBTreeWithBloomFilter.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTree.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTree.java 2012-01-24 01:31:52 UTC (rev 5859) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTree.java 2012-01-24 13:39:42 UTC (rev 5860) @@ -949,8 +949,8 @@ store.delete(oldAddr); } -// -// filter.write(store); + + filter.write(store); } Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2012-01-24 01:31:52 UTC (rev 5859) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2012-01-24 13:39:42 UTC (rev 5860) @@ -4182,17 +4182,29 @@ } + /** + * Recycle (aka delete) the allocation. This method also adjusts the #of + * bytes released in the {@link BTreeCounters}. + * + * @param addr + * The address to be recycled. + * + * @return The #of bytes which were recycled and ZERO (0) if the address is + * {@link IRawStore#NULL}. 
+ */ protected int recycle(final long addr) { - if (addr != IRawStore.NULL) { - final int nbytes = store.getByteCount(addr); - getBtreeCounters().bytesReleased += nbytes; - - store.delete(addr); - - return nbytes; - } else { + + if (addr == IRawStore.NULL) return 0; - } + + final int nbytes = store.getByteCount(addr); + + getBtreeCounters().bytesReleased += nbytes; + + store.delete(addr); + + return nbytes; + } } Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/BTree.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/BTree.java 2012-01-24 01:31:52 UTC (rev 5859) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/btree/BTree.java 2012-01-24 13:39:42 UTC (rev 5860) @@ -905,8 +905,8 @@ // * @see https://sourceforge.net/apps/trac/bigdata/ticket/440 // */ recycle(filter.getAddr()); -// -// filter.write(store); + + filter.write(store); } Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/test/com/bigdata/btree/TestBTreeWithBloomFilter.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/test/com/bigdata/btree/TestBTreeWithBloomFilter.java 2012-01-24 01:31:52 UTC (rev 5859) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/test/com/bigdata/btree/TestBTreeWithBloomFilter.java 2012-01-24 13:39:42 UTC (rev 5860) @@ -136,7 +136,8 @@ assertNotNull(btree.getBloomFilter()); // show the filter state. - System.err.println(btree.getBloomFilter().toString()); + if(log.isInfoEnabled()) + log.info(btree.getBloomFilter().toString()); final byte[] k0 = new byte[]{0}; final byte[] k1 = new byte[]{1}; @@ -160,13 +161,15 @@ assertTrue(btree.getBloomFilter().contains(k0)); // show the filter state. - System.err.println(btree.getBloomFilter().toString()); + if(log.isInfoEnabled()) + log.info(btree.getBloomFilter().toString()); // remove the index entry. assertEquals(k0,btree.remove(k0)); // show the filter state. - System.err.println(btree.getBloomFilter().toString()); + if(log.isInfoEnabled()) + log.info(btree.getBloomFilter().toString()); // verify at BTree API (contains) assertFalse(btree.contains(k1)); @@ -207,7 +210,7 @@ // assertNotNull(btree.getBloomFilter()); // // // show the filter state. -// System.err.println(btree.getBloomFilter().toString()); +// if(log.isInfoEnabled()) log.info(btree.getBloomFilter().toString()); // // final byte[] k0 = new byte[]{0}; // final byte[] k1 = new byte[]{1}; @@ -231,7 +234,7 @@ // assertTrue(btree.getBloomFilter().contains(k0)); // // // show the filter state. -// System.err.println(btree.getBloomFilter().toString()); +// if(log.isInfoEnabled()) log.info(btree.getBloomFilter().toString()); // // /* // * verify that the normal iterator is used for [k0] since the filter @@ -285,7 +288,8 @@ btree.getRoot(); // show the filter state. - System.err.println(btree.getBloomFilter().toString()); + if(log.isInfoEnabled()) + log.info(btree.getBloomFilter().toString()); final byte[] k0 = new byte[]{0}; final byte[] k1 = new byte[]{1}; @@ -306,7 +310,8 @@ assertTrue(btree.getBloomFilter().contains(k0)); // show the filter state. - System.err.println("before checkpoint: "+btree.getBloomFilter()); + if(log.isInfoEnabled()) + log.info("before checkpoint: "+btree.getBloomFilter()); // write a checkpoint (should force the bloom filter to the store). final long addrCheckpoint = btree.writeCheckpoint(); @@ -320,7 +325,8 @@ // load the checkpoint record. 
final Checkpoint checkpoint = Checkpoint.load(store, addrCheckpoint); - System.err.println(checkpoint.toString()); + if(log.isInfoEnabled()) + log.info(checkpoint.toString()); // assert bloom filter address is defined. assertNotSame(0L, checkpoint.getBloomFilterAddr()); @@ -329,7 +335,8 @@ final BloomFilter bloomFilter = (BloomFilter) SerializerUtil .deserialize(store.read(checkpoint.getBloomFilterAddr())); - System.err.println("as read from store: "+bloomFilter); + if(log.isInfoEnabled()) + log.info("as read from store: "+bloomFilter); /* * Verify that we read in a bloom filter instance that has the same @@ -340,7 +347,8 @@ assertTrue(bloomFilter.contains(k0)); // show the filter state. - System.err.println(btree.getBloomFilter().toString()); + if(log.isInfoEnabled()) + log.info(btree.getBloomFilter().toString()); } @@ -361,7 +369,8 @@ assertNotNull(btree.getBloomFilter()); // show the filter state. - System.err.println(btree.getBloomFilter().toString()); + if(log.isInfoEnabled()) + log.info(btree.getBloomFilter().toString()); /* * Verify that the auto-magical reappearance of the bloom filter @@ -578,7 +587,8 @@ maxN = factory.maxN; - System.err.println("factory="+factory); + if(log.isInfoEnabled()) + log.info("factory="+factory); btree = BTree.create(store, metadata); This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
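The refactored recycle() above replaces a nested if/else with a guard clause, and the checkpoint fix itself simply restores the filter.write(store) call after the old bloom filter address has been recycled, so the filter state is no longer lost at a checkpoint. As a minimal, self-contained sketch of the guard-clause form (RawStore here is a hypothetical stand-in for the store and counter collaborators of the real class; only the control flow mirrors the committed method):

{{{
/** Minimal sketch of the guard-clause recycle() pattern (hypothetical types). */
interface RawStore {
    long NULL = 0L;                 // sentinel for "no allocation"
    int getByteCount(long addr);    // size of the allocation at addr
    void delete(long addr);         // release the allocation at addr
}

class RecycleSketch {

    private final RawStore store;
    private long bytesReleased;     // stands in for BTreeCounters#bytesReleased

    RecycleSketch(final RawStore store) {
        this.store = store;
    }

    /**
     * Recycle (delete) the allocation at addr.
     *
     * @return the #of bytes recycled, and ZERO (0) if addr is RawStore.NULL.
     */
    int recycle(final long addr) {
        if (addr == RawStore.NULL)
            return 0;                          // nothing to recycle

        final int nbytes = store.getByteCount(addr);
        bytesReleased += nbytes;               // track the #of bytes released
        store.delete(addr);                    // release the allocation
        return nbytes;
    }
}
}}}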
From: <tho...@us...> - 2012-01-25 15:42:53
|
Revision: 5877 http://bigdata.svn.sourceforge.net/bigdata/?rev=5877&view=rev Author: thompsonbry Date: 2012-01-25 15:42:42 +0000 (Wed, 25 Jan 2012) Log Message: ----------- Updating the release notes Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_4.txt branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_4.txt branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_1_1.txt Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_4.txt =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_4.txt 2012-01-25 15:41:39 UTC (rev 5876) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_4.txt 2012-01-25 15:42:42 UTC (rev 5877) @@ -12,7 +12,7 @@ You can checkout this release from: -https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_3 +https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_4 Feature summary: Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_4.txt =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_4.txt 2012-01-25 15:41:39 UTC (rev 5876) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_4.txt 2012-01-25 15:42:42 UTC (rev 5877) @@ -12,7 +12,7 @@ You can checkout this release from: -https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_3 +https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_4 Feature summary: Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_1_1.txt =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_1_1.txt 2012-01-25 15:41:39 UTC (rev 5876) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_1_1.txt 2012-01-25 15:42:42 UTC (rev 5877) @@ -12,7 +12,7 @@ You can checkout this release from: -https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_1_0 +https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_1_1 New features: This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <mar...@us...> - 2012-01-26 17:36:05
|
Revision: 5886 http://bigdata.svn.sourceforge.net/bigdata/?rev=5886&view=rev Author: martyncutcher Date: 2012-01-26 17:35:54 +0000 (Thu, 26 Jan 2012) Log Message: ----------- Fixes problem when a deferredFree block requires a bob allocation. This was not handled when immediately freeing the block after releasing the allocations referenced within it. Ticket #453 Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-01-26 17:18:42 UTC (rev 5885) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-01-26 17:35:54 UTC (rev 5886) @@ -273,7 +273,8 @@ * @see #ALLOCATION_SIZES */ //String DEFAULT_ALLOCATION_SIZES = "1, 2, 3, 5, 8, 12, 16, 32, 48, 64, 128"; - String DEFAULT_ALLOCATION_SIZES = "1, 2, 3, 5, 8, 12, 16, 32, 48, 64, 128, 192, 320, 512, 832, 1344, 2176, 3520"; + // String DEFAULT_ALLOCATION_SIZES = "1, 2, 3, 5, 8, 12, 16, 32, 48, 64, 128, 192, 320, 512, 832, 1344, 2176, 3520"; + String DEFAULT_ALLOCATION_SIZES = "1, 2, 3, 5, 8, 12, 16, 32, 48, 64, 128, 192, 320, 512"; // private static final int[] DEFAULT_ALLOC_SIZES = { 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181 }; // private static final int[] ALLOC_SIZES = { 1, 2, 4, 8, 16, 32, 64, 128 }; @@ -1725,7 +1726,7 @@ } private boolean freeBlob(final int hdr_addr, final int sze, final IAllocationContext context) { - if (sze < (m_maxFixedAlloc-4)) + if (sze <= (m_maxFixedAlloc-4)) throw new IllegalArgumentException("Unexpected address size"); if (m_storageStats != null) { @@ -1757,6 +1758,44 @@ } } + private boolean freeImmediateBlob(final int hdr_addr, final int sze) { + if (sze <= (m_maxFixedAlloc-4)) + throw new IllegalArgumentException("Unexpected address size"); + + if (m_storageStats != null) { + m_storageStats.deleteBlob(sze); + } + + final int alloc = m_maxFixedAlloc-4; + final int blcks = (alloc - 1 + sze)/alloc; + + // read in header block, then free each reference + final byte[] hdr = new byte[(blcks+1) * 4 + 4]; // add space for checksum + getData(hdr_addr, hdr); + + final DataInputStream instr = new DataInputStream( + new ByteArrayInputStream(hdr, 0, hdr.length-4) ); + + // retain lock for all frees + m_allocationLock.lock(); + try { + final int allocs = instr.readInt(); + int rem = sze; + for (int i = 0; i < allocs; i++) { + final int nxt = instr.readInt(); + immediateFree(nxt, rem <= alloc ? 
rem : alloc); + rem -= alloc; + } + immediateFree(hdr_addr, hdr.length); + + return true; + } catch (IOException ioe) { + throw new RuntimeException(ioe); + } finally { + m_allocationLock.unlock(); + } + } + // private long immediateFreeCount = 0; private void immediateFree(final int addr, final int sze) { immediateFree(addr, sze, false); @@ -1771,6 +1810,12 @@ return; } + if (sze > (this.m_maxFixedAlloc-4)) { + freeImmediateBlob(addr, sze); + + return; + } + m_allocationLock.lock(); try { final FixedAllocator alloc = getBlockByAddress(addr); Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java 2012-01-26 17:18:42 UTC (rev 5885) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java 2012-01-26 17:35:54 UTC (rev 5886) @@ -1026,6 +1026,61 @@ } /** + * This test releases over a blobs worth of deferred frees + */ + public void test_blobDeferredFrees() { + Journal store = (Journal) getStore(5); + try { + + RWStrategy bs = (RWStrategy) store.getBufferStrategy(); + + ArrayList<Long> addrs = new ArrayList<Long>(); + for (int i = 0; i < 4000; i++) { + addrs.add(bs.write(randomData(45))); + } + store.commit(); + + Thread.currentThread().sleep(5000); + + for (long addr : addrs) { + bs.delete(addr); + } + // assertTrue(bs.isCommitted(addrs.get(0))); + + store.commit(); + + // modify store but do not allocate similar size block + // as that we want to see has been removed + final long addr2 = bs.write(randomData(220)); // modify store + + store.commit(); + bs.delete(addr2); // modify store + store.commit(); + + // delete is actioned + assertFalse(false /* bs.isCommitted(addrs.get(0)) */); + } catch (InterruptedException e) { + + } finally { + store.destroy(); + } + } + + ByteBuffer randomData(final int sze) { + byte[] buf = new byte[sze + 4]; // extra for checksum + r.nextBytes(buf); + + return ByteBuffer.wrap(buf, 0, sze); + } + + byte[] randomBytes(final int sze) { + byte[] buf = new byte[sze + 4]; // extra for checksum + r.nextBytes(buf); + + return buf; + } + + /** * Test of blob allocation, does not check on read back, just the * allocation */ @@ -1110,7 +1165,6 @@ test_blob_readBack(); } } - /** * Test of blob allocation and read-back, firstly from cache and then * from disk. 
Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-01-26 17:18:42 UTC (rev 5885) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-01-26 17:35:54 UTC (rev 5886) @@ -273,7 +273,8 @@ * @see #ALLOCATION_SIZES */ //String DEFAULT_ALLOCATION_SIZES = "1, 2, 3, 5, 8, 12, 16, 32, 48, 64, 128"; - String DEFAULT_ALLOCATION_SIZES = "1, 2, 3, 5, 8, 12, 16, 32, 48, 64, 128, 192, 320, 512, 832, 1344, 2176, 3520"; + // String DEFAULT_ALLOCATION_SIZES = "1, 2, 3, 5, 8, 12, 16, 32, 48, 64, 128, 192, 320, 512, 832, 1344, 2176, 3520"; + String DEFAULT_ALLOCATION_SIZES = "1, 2, 3, 5, 8, 12, 16, 32, 48, 64, 128, 192, 320, 512"; // private static final int[] DEFAULT_ALLOC_SIZES = { 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181 }; // private static final int[] ALLOC_SIZES = { 1, 2, 4, 8, 16, 32, 64, 128 }; @@ -1757,6 +1758,44 @@ } } + private boolean freeImmediateBlob(final int hdr_addr, final int sze) { + if (sze <= (m_maxFixedAlloc-4)) + throw new IllegalArgumentException("Unexpected address size"); + + if (m_storageStats != null) { + m_storageStats.deleteBlob(sze); + } + + final int alloc = m_maxFixedAlloc-4; + final int blcks = (alloc - 1 + sze)/alloc; + + // read in header block, then free each reference + final byte[] hdr = new byte[(blcks+1) * 4 + 4]; // add space for checksum + getData(hdr_addr, hdr); + + final DataInputStream instr = new DataInputStream( + new ByteArrayInputStream(hdr, 0, hdr.length-4) ); + + // retain lock for all frees + m_allocationLock.lock(); + try { + final int allocs = instr.readInt(); + int rem = sze; + for (int i = 0; i < allocs; i++) { + final int nxt = instr.readInt(); + immediateFree(nxt, rem <= alloc ? 
rem : alloc); + rem -= alloc; + } + immediateFree(hdr_addr, hdr.length); + + return true; + } catch (IOException ioe) { + throw new RuntimeException(ioe); + } finally { + m_allocationLock.unlock(); + } + } + // private long immediateFreeCount = 0; private void immediateFree(final int addr, final int sze) { immediateFree(addr, sze, false); @@ -1771,6 +1810,12 @@ return; } + if (sze > (this.m_maxFixedAlloc-4)) { + freeImmediateBlob(addr, sze); + + return; + } + m_allocationLock.lock(); try { final FixedAllocator alloc = getBlockByAddress(addr); Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java 2012-01-26 17:18:42 UTC (rev 5885) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java 2012-01-26 17:35:54 UTC (rev 5886) @@ -1026,6 +1026,61 @@ } /** + * This test releases over a blobs worth of deferred frees + */ + public void test_blobDeferredFrees() { + Journal store = (Journal) getStore(5); + try { + + RWStrategy bs = (RWStrategy) store.getBufferStrategy(); + + ArrayList<Long> addrs = new ArrayList<Long>(); + for (int i = 0; i < 4000; i++) { + addrs.add(bs.write(randomData(45))); + } + store.commit(); + + Thread.currentThread().sleep(5000); + + for (long addr : addrs) { + bs.delete(addr); + } + // assertTrue(bs.isCommitted(addrs.get(0))); + + store.commit(); + + // modify store but do not allocate similar size block + // as that we want to see has been removed + final long addr2 = bs.write(randomData(220)); // modify store + + store.commit(); + bs.delete(addr2); // modify store + store.commit(); + + // delete is actioned + assertFalse(false /* bs.isCommitted(addrs.get(0)) */); + } catch (InterruptedException e) { + + } finally { + store.destroy(); + } + } + + ByteBuffer randomData(final int sze) { + byte[] buf = new byte[sze + 4]; // extra for checksum + r.nextBytes(buf); + + return ByteBuffer.wrap(buf, 0, sze); + } + + byte[] randomBytes(final int sze) { + byte[] buf = new byte[sze + 4]; // extra for checksum + r.nextBytes(buf); + + return buf; + } + + /** * Test of blob allocation, does not check on read back, just the * allocation */ This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
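The new freeImmediateBlob() walks a blob's header to release each fixed-size slot and then the header itself, and immediateFree() now routes any allocation larger than one fixed slot (m_maxFixedAlloc - 4 bytes) through it. Below is a minimal sketch of that header walk, assuming a hypothetical Store interface and the header layout used above (an int count, one int address per slot, and a trailing 4-byte checksum); it is an illustration, not the committed method.

{{{
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

/** Sketch of freeing a blob by walking its header (hypothetical Store type). */
class BlobFreeSketch {

    interface Store {
        byte[] read(int addr, int nbytes); // read nbytes at addr
        void free(int addr, int nbytes);   // release one allocation
    }

    private final Store store;
    private final int maxFixedAlloc;       // largest fixed allocator slot size

    BlobFreeSketch(final Store store, final int maxFixedAlloc) {
        this.store = store;
        this.maxFixedAlloc = maxFixedAlloc;
    }

    /** Free a blob of [size] user bytes whose header lives at [hdrAddr]. */
    void freeBlob(final int hdrAddr, final int size) throws IOException {
        final int alloc = maxFixedAlloc - 4;            // usable bytes per slot (4 for checksum)
        final int nslots = (alloc - 1 + size) / alloc;  // ceiling division

        // header = int count + one int address per slot + 4-byte checksum
        final int hdrLen = (nslots + 1) * 4 + 4;
        final byte[] hdr = store.read(hdrAddr, hdrLen);

        final DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(hdr, 0, hdr.length - 4));

        final int count = in.readInt();
        int rem = size;
        for (int i = 0; i < count; i++) {
            final int next = in.readInt();
            store.free(next, rem <= alloc ? rem : alloc); // last slot may be short
            rem -= alloc;
        }
        store.free(hdrAddr, hdr.length);                  // finally free the header itself
    }
}
}}}

Note also that the size guard in freeBlob() changed from < to <=, so a payload that fits within a single fixed slot is rejected as not being a blob at all.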
From: <tho...@us...> - 2012-01-26 21:14:32
|
Revision: 5890 http://bigdata.svn.sourceforge.net/bigdata/?rev=5890&view=rev Author: thompsonbry Date: 2012-01-26 21:14:25 +0000 (Thu, 26 Jan 2012) Log Message: ----------- This was not fixed correctly the first time as can be seen in the following stack trace. {{{ Exception in thread "main" java.lang.RuntimeException: java.lang.NoSuchMethodError: org.apache.log4j.Logger.setLevel(Lorg/apache/log4j/Level;)V at com.bigdata.Banner.setDefaultLogLevel(Banner.java:217) at com.bigdata.Banner.banner(Banner.java:142) at com.bigdata.journal.FileMetadata.createInstance(FileMetadata.java:1285) at com.bigdata.journal.AbstractJournal.<init>(AbstractJournal.java:879) at com.bigdata.journal.Journal.<init>(Journal.java:235) at com.bigdata.journal.Journal.<init>(Journal.java:228) at com.bigdata.rdf.sail.BigdataSail.createLTS(BigdataSail.java:686) at com.bigdata.rdf.sail.BigdataSail.<init>(BigdataSail.java:665) }}} The problem is: NoSuchMethodException - vs - NoSuchMethodError It is trapping the former, but the latter is being thrown. @see https://sourceforge.net/apps/trac/bigdata/ticket/362 Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/Banner.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/Banner.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/Banner.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/Banner.java 2012-01-26 18:44:59 UTC (rev 5889) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/Banner.java 2012-01-26 21:14:25 UTC (rev 5890) @@ -205,7 +205,7 @@ * * @see https://sourceforge.net/apps/trac/bigdata/ticket/362 */ - if (InnerCause.isInnerCause(t, NoSuchMethodException.class)) { + if (InnerCause.isInnerCause(t, NoSuchMethodError.class)) { log.error("Unable to raise the default log level to WARN." + " Logging is NOT properly configured." Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/Banner.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/Banner.java 2012-01-26 18:44:59 UTC (rev 5889) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/Banner.java 2012-01-26 21:14:25 UTC (rev 5890) @@ -205,7 +205,7 @@ * * @see https://sourceforge.net/apps/trac/bigdata/ticket/362 */ - if (InnerCause.isInnerCause(t, NoSuchMethodException.class)) { + if (InnerCause.isInnerCause(t, NoSuchMethodError.class)) { log.error("Unable to raise the default log level to WARN." + " Logging is NOT properly configured." This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
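NoSuchMethodError is a LinkageError raised by the JVM when a method that was resolved at compile time is missing at run time (here, apparently an incompatible log4j without Logger.setLevel on the classpath), while NoSuchMethodException is a checked exception thrown by the reflection API; the two are unrelated types, so testing the cause chain for one will never match the other. A small self-contained illustration follows; isInnerCause() below is a simplified stand-in for the InnerCause utility, not its actual implementation.

{{{
/** Demonstrates why the cause-chain test must name NoSuchMethodError. */
final class CauseCheck {

    /** Walk the cause chain; true if any link is an instance of cls (simplified stand-in). */
    static boolean isInnerCause(Throwable t, final Class<? extends Throwable> cls) {
        while (t != null) {
            if (cls.isInstance(t))
                return true;
            t = t.getCause();
        }
        return false;
    }

    public static void main(final String[] args) {
        final Throwable t = new RuntimeException(new NoSuchMethodError(
                "org.apache.log4j.Logger.setLevel(Lorg/apache/log4j/Level;)V"));

        System.out.println(isInnerCause(t, NoSuchMethodException.class)); // false
        System.out.println(isInnerCause(t, NoSuchMethodError.class));     // true
    }
}
}}}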
From: <tho...@us...> - 2012-02-01 17:21:02
|
Revision: 5939 http://bigdata.svn.sourceforge.net/bigdata/?rev=5939&view=rev Author: thompsonbry Date: 2012-02-01 17:20:50 +0000 (Wed, 01 Feb 2012) Log Message: ----------- Hooked the TestMROWTransactionsNoHistory and WithHistory test suites into TestBigdataSailWithQuads (it was only running the older TestMROWTransactions which is now an abstract class). Added a stress test variant with history into TestMROWTransactionsWithHistory. @see https://sourceforge.net/apps/trac/bigdata/ticket/440 Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java 2012-02-01 16:55:03 UTC (rev 5938) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java 2012-02-01 17:20:50 UTC (rev 5939) @@ -113,7 +113,8 @@ suite.addTestSuite(com.bigdata.rdf.sail.TestRollbacks.class); suite.addTestSuite(com.bigdata.rdf.sail.TestRollbacksTx.class); - suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactions.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactionsNoHistory.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactionsWithHistory.class); suite.addTestSuite(com.bigdata.rdf.sail.TestMillisecondPrecisionForInlineDateTimes.class); Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-01 16:55:03 UTC (rev 5938) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-01 17:20:50 UTC (rev 5939) @@ -1,5 +1,7 @@ package com.bigdata.rdf.sail; +import java.util.Random; + public class TestMROWTransactionsWithHistory extends TestMROWTransactions { public TestMROWTransactionsWithHistory() { @@ -17,4 +19,21 @@ domultiple_csem_transaction_onethread(1); } + public void test_multiple_csem_transaction_withHhistory_stress() throws Exception { + + final Random r = new Random(); + + for (int i = 0; i < 100; i++) { + + final int nreaderThreads = r.nextInt(19) + 1; + + log.warn("Trial: " + i + ", nreaderThreads=" + nreaderThreads); + + domultiple_csem_transaction2(1/* retentionMillis */, + nreaderThreads, 20/* nwriters */, 400/* nreaders */); + + } + + } + } Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java 2012-02-01 16:55:03 UTC (rev 5938) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java 2012-02-01 17:20:50 UTC (rev 5939) @@ -104,8 +104,9 @@ 
suite.addTestSuite(com.bigdata.rdf.sail.TestRollbacks.class); suite.addTestSuite(com.bigdata.rdf.sail.TestRollbacksTx.class); - suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactions.class); - + suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactionsNoHistory.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactionsWithHistory.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestMillisecondPrecisionForInlineDateTimes.class); suite.addTestSuite(com.bigdata.rdf.sail.TestTicket275.class); Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-01 16:55:03 UTC (rev 5938) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-01 17:20:50 UTC (rev 5939) @@ -1,5 +1,7 @@ package com.bigdata.rdf.sail; +import java.util.Random; + public class TestMROWTransactionsWithHistory extends TestMROWTransactions { public TestMROWTransactionsWithHistory() { @@ -17,4 +19,21 @@ domultiple_csem_transaction_onethread(1); } + public void test_multiple_csem_transaction_withHhistory_stress() throws Exception { + + final Random r = new Random(); + + for (int i = 0; i < 100; i++) { + + final int nreaderThreads = r.nextInt(19) + 1; + + log.warn("Trial: " + i + ", nreaderThreads=" + nreaderThreads); + + domultiple_csem_transaction2(1/* retentionMillis */, + nreaderThreads, 20/* nwriters */, 400/* nreaders */); + + } + + } + } This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
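The wiring being corrected here is plain JUnit 3 suite composition: TestMROWTransactions is now abstract, and an abstract TestCase cannot be instantiated by the runner, so the concrete NoHistory/WithHistory subclasses must be listed in each suite instead. A toy example of the shape, with all class names below being placeholders rather than the real test classes:

{{{
import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

/** Sketch of JUnit 3 suite composition with an abstract base test (placeholder names). */
public class MROWSuiteSketch {

    /** Abstract base holding the shared harness; not runnable on its own. */
    public static abstract class AbstractMROWTest extends TestCase {
        public AbstractMROWTest(final String name) { super(name); }
        protected void doStress(final int retentionMillis) { /* shared harness */ }
    }

    /** Concrete variant: no history retention. */
    public static class NoHistory extends AbstractMROWTest {
        public NoHistory(final String name) { super(name); }
        public void test_stress() { doStress(0); }
    }

    /** Concrete variant: with history retention. */
    public static class WithHistory extends AbstractMROWTest {
        public WithHistory(final String name) { super(name); }
        public void test_stress() { doStress(1); }
    }

    public static Test suite() {
        final TestSuite suite = new TestSuite("MROW transactions");
        suite.addTestSuite(NoHistory.class);   // add the concrete subclasses,
        suite.addTestSuite(WithHistory.class); // not the abstract base class
        return suite;
    }
}
}}}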
From: <tho...@us...> - 2012-02-01 18:57:17
|
Revision: 5941 http://bigdata.svn.sourceforge.net/bigdata/?rev=5941&view=rev Author: thompsonbry Date: 2012-02-01 18:57:10 +0000 (Wed, 01 Feb 2012) Log Message: ----------- Commented out the extremely long running variants of this test in favor of the version which runs 100 passes with a random #of readers in (1:20). @see https://sourceforge.net/apps/trac/bigdata/ticket/440 Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsNoHistory.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsNoHistory.java branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsNoHistory.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsNoHistory.java 2012-02-01 17:39:09 UTC (rev 5940) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsNoHistory.java 2012-02-01 18:57:10 UTC (rev 5941) @@ -57,23 +57,23 @@ super.tearDown(); } - // similar to test_multiple_transactions but uses direct AbsractTripleStore - // manipulations rather than RepositoryConnections - public void test_multiple_csem_transaction_nohistory() throws Exception { - -// domultiple_csem_transaction(0); - - domultiple_csem_transaction2(0/* retentionMillis */, - 2/* nreaderThreads */, 1000/* nwriters */, 20 * 1000/* nreaders */); - - } - - public void test_multiple_csem_transaction_nohistory_oneReaderThread() throws Exception { - - domultiple_csem_transaction2(0/* retentionMillis */, - 1/* nreaderThreads */, 1000/* nwriters */, 20 * 1000/* nreaders */); - - } +// // similar to test_multiple_transactions but uses direct AbsractTripleStore +// // manipulations rather than RepositoryConnections +// public void test_multiple_csem_transaction_nohistory() throws Exception { +// +//// domultiple_csem_transaction(0); +// +// domultiple_csem_transaction2(0/* retentionMillis */, +// 2/* nreaderThreads */, 1000/* nwriters */, 20 * 1000/* nreaders */); +// +// } +// +// public void test_multiple_csem_transaction_nohistory_oneReaderThread() throws Exception { +// +// domultiple_csem_transaction2(0/* retentionMillis */, +// 1/* nreaderThreads */, 1000/* nwriters */, 20 * 1000/* nreaders */); +// +// } public void test_multiple_csem_transaction_nohistory_stress() throws Exception { @@ -92,44 +92,44 @@ } - public void notest_stress_multiple_csem_transaction_nohistory() throws Exception { - - final int retentionMillis = 0; - - for (int i = 0; i< 50; i++) { - - domultiple_csem_transaction2(retentionMillis, 2/* nreaderThreads */, - 1000/* nwriters */, 20 * 1000/* nreaders */); - - } - - } - - public void test_multiple_csem_transaction_onethread_nohistory() throws Exception { - - domultiple_csem_transaction_onethread(0); - - } - -// Open a read committed transaction - //do reads - //do write without closing read - //commit write - //close read - //repeat - public void notest_multiple_csem_transaction_onethread_nohistory_debug() throws Exception { - PseudoRandom r = new PseudoRandom(2000); - - for (int run = 0; run < 200; run++) { - final int uris = 1 + r.nextInt(599); - final int preds = 1 + r.nextInt(49); - try { - 
System.err.println("Testing with " + uris + " uris, " + preds + " preds"); - domultiple_csem_transaction_onethread(0, uris, preds); - } catch (Exception e) { - System.err.println("problem with " + uris + " uris, " + preds + " preds"); - throw e; - } - } - } +// public void notest_stress_multiple_csem_transaction_nohistory() throws Exception { +// +// final int retentionMillis = 0; +// +// for (int i = 0; i< 50; i++) { +// +// domultiple_csem_transaction2(retentionMillis, 2/* nreaderThreads */, +// 1000/* nwriters */, 20 * 1000/* nreaders */); +// +// } +// +// } +// +// public void test_multiple_csem_transaction_onethread_nohistory() throws Exception { +// +// domultiple_csem_transaction_onethread(0); +// +// } +// +//// Open a read committed transaction +// //do reads +// //do write without closing read +// //commit write +// //close read +// //repeat +// public void notest_multiple_csem_transaction_onethread_nohistory_debug() throws Exception { +// PseudoRandom r = new PseudoRandom(2000); +// +// for (int run = 0; run < 200; run++) { +// final int uris = 1 + r.nextInt(599); +// final int preds = 1 + r.nextInt(49); +// try { +// System.err.println("Testing with " + uris + " uris, " + preds + " preds"); +// domultiple_csem_transaction_onethread(0, uris, preds); +// } catch (Exception e) { +// System.err.println("problem with " + uris + " uris, " + preds + " preds"); +// throw e; +// } +// } +// } } Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-01 17:39:09 UTC (rev 5940) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-01 18:57:10 UTC (rev 5941) @@ -11,15 +11,15 @@ super(arg0); } - public void test_multiple_csem_transaction_withHistory() throws Exception { - domultiple_csem_transaction(1); - } +// public void test_multiple_csem_transaction_withHistory() throws Exception { +// domultiple_csem_transaction(1); +// } +// +// public void test_multiple_csem_transaction_onethread_withHistory() throws Exception { +// domultiple_csem_transaction_onethread(1); +// } - public void test_multiple_csem_transaction_onethread_withHistory() throws Exception { - domultiple_csem_transaction_onethread(1); - } - - public void test_multiple_csem_transaction_withHhistory_stress() throws Exception { + public void test_multiple_csem_transaction_withHistory_stress() throws Exception { final Random r = new Random(); Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsNoHistory.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsNoHistory.java 2012-02-01 17:39:09 UTC (rev 5940) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsNoHistory.java 2012-02-01 18:57:10 UTC (rev 5941) @@ -57,23 +57,23 @@ super.tearDown(); } - // similar to test_multiple_transactions but uses direct AbsractTripleStore - // manipulations rather than RepositoryConnections - public void test_multiple_csem_transaction_nohistory() throws Exception { - -// domultiple_csem_transaction(0); - - domultiple_csem_transaction2(0/* retentionMillis */, - 2/* nreaderThreads */, 1000/* nwriters */, 20 * 
1000/* nreaders */); - - } - - public void test_multiple_csem_transaction_nohistory_oneReaderThread() throws Exception { - - domultiple_csem_transaction2(0/* retentionMillis */, - 1/* nreaderThreads */, 1000/* nwriters */, 20 * 1000/* nreaders */); - - } +// // similar to test_multiple_transactions but uses direct AbsractTripleStore +// // manipulations rather than RepositoryConnections +// public void test_multiple_csem_transaction_nohistory() throws Exception { +// +//// domultiple_csem_transaction(0); +// +// domultiple_csem_transaction2(0/* retentionMillis */, +// 2/* nreaderThreads */, 1000/* nwriters */, 20 * 1000/* nreaders */); +// +// } +// +// public void test_multiple_csem_transaction_nohistory_oneReaderThread() throws Exception { +// +// domultiple_csem_transaction2(0/* retentionMillis */, +// 1/* nreaderThreads */, 1000/* nwriters */, 20 * 1000/* nreaders */); +// +// } public void test_multiple_csem_transaction_nohistory_stress() throws Exception { @@ -92,44 +92,44 @@ } - public void notest_stress_multiple_csem_transaction_nohistory() throws Exception { - - final int retentionMillis = 0; - - for (int i = 0; i< 50; i++) { - - domultiple_csem_transaction2(retentionMillis, 2/* nreaderThreads */, - 1000/* nwriters */, 20 * 1000/* nreaders */); - - } - - } - - public void test_multiple_csem_transaction_onethread_nohistory() throws Exception { - - domultiple_csem_transaction_onethread(0); - - } - -// Open a read committed transaction - //do reads - //do write without closing read - //commit write - //close read - //repeat - public void notest_multiple_csem_transaction_onethread_nohistory_debug() throws Exception { - PseudoRandom r = new PseudoRandom(2000); - - for (int run = 0; run < 200; run++) { - final int uris = 1 + r.nextInt(599); - final int preds = 1 + r.nextInt(49); - try { - System.err.println("Testing with " + uris + " uris, " + preds + " preds"); - domultiple_csem_transaction_onethread(0, uris, preds); - } catch (Exception e) { - System.err.println("problem with " + uris + " uris, " + preds + " preds"); - throw e; - } - } - } +// public void notest_stress_multiple_csem_transaction_nohistory() throws Exception { +// +// final int retentionMillis = 0; +// +// for (int i = 0; i< 50; i++) { +// +// domultiple_csem_transaction2(retentionMillis, 2/* nreaderThreads */, +// 1000/* nwriters */, 20 * 1000/* nreaders */); +// +// } +// +// } +// +// public void test_multiple_csem_transaction_onethread_nohistory() throws Exception { +// +// domultiple_csem_transaction_onethread(0); +// +// } +// +//// Open a read committed transaction +// //do reads +// //do write without closing read +// //commit write +// //close read +// //repeat +// public void notest_multiple_csem_transaction_onethread_nohistory_debug() throws Exception { +// PseudoRandom r = new PseudoRandom(2000); +// +// for (int run = 0; run < 200; run++) { +// final int uris = 1 + r.nextInt(599); +// final int preds = 1 + r.nextInt(49); +// try { +// System.err.println("Testing with " + uris + " uris, " + preds + " preds"); +// domultiple_csem_transaction_onethread(0, uris, preds); +// } catch (Exception e) { +// System.err.println("problem with " + uris + " uris, " + preds + " preds"); +// throw e; +// } +// } +// } } Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 
2012-02-01 17:39:09 UTC (rev 5940) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-01 18:57:10 UTC (rev 5941) @@ -11,15 +11,15 @@ super(arg0); } - public void test_multiple_csem_transaction_withHistory() throws Exception { - domultiple_csem_transaction(1); - } +// public void test_multiple_csem_transaction_withHistory() throws Exception { +// domultiple_csem_transaction(1); +// } +// +// public void test_multiple_csem_transaction_onethread_withHistory() throws Exception { +// domultiple_csem_transaction_onethread(1); +// } - public void test_multiple_csem_transaction_onethread_withHistory() throws Exception { - domultiple_csem_transaction_onethread(1); - } - - public void test_multiple_csem_transaction_withHhistory_stress() throws Exception { + public void test_multiple_csem_transaction_withHistory_stress() throws Exception { final Random r = new Random(); This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
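The surviving stress test draws its parameters per trial and logs them, which is what makes a randomized 100-pass run reproducible as a fixed-parameter test when it fails; note that r.nextInt(19) + 1 yields reader-thread counts in the range 1..19. A sketch of that trial-loop shape, with the actual test harness reduced to a placeholder interface:

{{{
import java.util.Random;

/**
 * Shape of the randomized stress trial adopted above (the Harness call is a
 * placeholder). Each trial logs the parameters it drew so a failing
 * combination can be replayed as a fixed-parameter test.
 */
public class StressTrialSketch {

    interface Harness {
        void run(int retentionMillis, int nreaderThreads, int nwriters, int nreaders)
                throws Exception;
    }

    public static void runTrials(final Harness harness, final int ntrials) throws Exception {
        final Random r = new Random();
        for (int i = 0; i < ntrials; i++) {
            // nextInt(19) is 0..18, so nreaderThreads ranges over 1..19.
            final int nreaderThreads = r.nextInt(19) + 1;
            System.out.println("Trial: " + i + ", nreaderThreads=" + nreaderThreads);
            harness.run(1 /* retentionMillis */, nreaderThreads,
                    20 /* nwriters */, 400 /* nreaders */);
        }
    }
}
}}}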
From: <tho...@us...> - 2012-02-01 21:43:50
|
Revision: 5942 http://bigdata.svn.sourceforge.net/bigdata/?rev=5942&view=rev Author: thompsonbry Date: 2012-02-01 21:43:44 +0000 (Wed, 01 Feb 2012) Log Message: ----------- The new versions of TestMROWTransactions were not integrations into the triples and sids mode SAIL test suites. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java 2012-02-01 18:57:10 UTC (rev 5941) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java 2012-02-01 21:43:44 UTC (rev 5942) @@ -95,7 +95,8 @@ suite.addTestSuite(com.bigdata.rdf.sail.TestRollbacks.class); suite.addTestSuite(com.bigdata.rdf.sail.TestRollbacksTx.class); suite.addTestSuite(com.bigdata.rdf.sail.TestRollbacksTM.class); - suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactions.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactionsNoHistory.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactionsWithHistory.class); suite.addTestSuite(com.bigdata.rdf.sail.TestMillisecondPrecisionForInlineDateTimes.class); Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java 2012-02-01 18:57:10 UTC (rev 5941) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java 2012-02-01 21:43:44 UTC (rev 5942) @@ -89,8 +89,9 @@ suite.addTestSuite(com.bigdata.rdf.sail.TestRollbacks.class); suite.addTestSuite(com.bigdata.rdf.sail.TestRollbacksTx.class); suite.addTestSuite(com.bigdata.rdf.sail.TestRollbacksTM.class); - suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactions.class); - + suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactionsNoHistory.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactionsWithHistory.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestMillisecondPrecisionForInlineDateTimes.class); suite.addTestSuite(com.bigdata.rdf.sail.TestTicket275.class); Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java 2012-02-01 18:57:10 UTC (rev 5941) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java 2012-02-01 21:43:44 UTC (rev 5942) @@ -88,8 +88,9 @@ suite.addTestSuite(com.bigdata.rdf.sail.TestRollbacks.class); suite.addTestSuite(com.bigdata.rdf.sail.TestRollbacksTx.class); suite.addTestSuite(com.bigdata.rdf.sail.TestRollbacksTM.class); - 
suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactions.class); - + suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactionsNoHistory.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactionsWithHistory.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestMillisecondPrecisionForInlineDateTimes.class); suite.addTestSuite(com.bigdata.rdf.sail.TestTicket275.class); Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java 2012-02-01 18:57:10 UTC (rev 5941) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java 2012-02-01 21:43:44 UTC (rev 5942) @@ -82,7 +82,8 @@ suite.addTestSuite(com.bigdata.rdf.sail.TestRollbacks.class); suite.addTestSuite(com.bigdata.rdf.sail.TestRollbacksTx.class); suite.addTestSuite(com.bigdata.rdf.sail.TestRollbacksTM.class); - suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactions.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactionsNoHistory.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestMROWTransactionsWithHistory.class); suite.addTestSuite(com.bigdata.rdf.sail.TestMillisecondPrecisionForInlineDateTimes.class); This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2012-02-02 12:12:30
|
Revision: 5943 http://bigdata.svn.sourceforge.net/bigdata/?rev=5943&view=rev Author: thompsonbry Date: 2012-02-02 12:12:19 +0000 (Thu, 02 Feb 2012) Log Message: ----------- Lowered the txLog message for ABORT from WARN to INFO. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-02-01 21:43:44 UTC (rev 5942) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-02-02 12:12:19 UTC (rev 5943) @@ -2080,7 +2080,7 @@ } - txLog.warn("ABORT"); + txLog.info("ABORT"); if (LRUNexus.INSTANCE != null) { Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-02-01 21:43:44 UTC (rev 5942) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-02-02 12:12:19 UTC (rev 5943) @@ -2080,7 +2080,7 @@ } - txLog.warn("ABORT"); + txLog.info("ABORT"); if (LRUNexus.INSTANCE != null) { This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
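Since the ABORT message is a constant string, no isInfoEnabled() guard is needed when dropping it to INFO; the guard only pays off when the message is built by concatenation, as in the test changes earlier in this series. A short sketch of both forms of the log4j idiom (the logger name below is illustrative):

{{{
import org.apache.log4j.Logger;

/** Sketch of the log4j level-guard idiom used throughout these changes. */
public class LogGuardSketch {

    private static final Logger txLog = Logger.getLogger("com.bigdata.txLog"); // name is illustrative

    public void onAbort(final long txId) {
        // A constant message costs nothing to build, so no guard is needed:
        txLog.info("ABORT");

        // A concatenated message should be guarded so the string is only
        // assembled when INFO is actually enabled for this logger:
        if (txLog.isInfoEnabled())
            txLog.info("ABORT: txId=" + txId);
    }
}
}}}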
From: <tho...@us...> - 2012-02-02 12:21:09
|
Revision: 5944 http://bigdata.svn.sourceforge.net/bigdata/?rev=5944&view=rev Author: thompsonbry Date: 2012-02-02 12:20:58 +0000 (Thu, 02 Feb 2012) Log Message: ----------- Modified the stress test to include the IIndexManager object in the report. There are some indications that the recently observed failures might be in the EmbeddedFederation mode and not related to the RWStore at all. @see https://sourceforge.net/apps/trac/bigdata/ticket/467 (IllegalStateException trying to access lexicon index using RWStore with recycling) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java 2012-02-02 12:12:19 UTC (rev 5943) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java 2012-02-02 12:20:58 UTC (rev 5944) @@ -288,11 +288,12 @@ if (!success.get()) { final Throwable ex = failex.get(); if (ex != null) { - fail("Test failed: firstCause=" + ex - + ", retentionMillis=" + retentionMillis - + ", nreaderThreads=" + nreaderThreads - + ", nwriters=" + nwriters + ", nreaders=" - + nreaders, ex); + fail("Test failed: firstCause=" + ex + + ", retentionMillis=" + retentionMillis + + ", nreaderThreads=" + nreaderThreads + + ", nwriters=" + nwriters + ", nreaders=" + + nreaders + ", indexManager=" + + repo.getDatabase().getIndexManager(), ex); } } if (log.isInfoEnabled()) Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java 2012-02-02 12:12:19 UTC (rev 5943) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java 2012-02-02 12:20:58 UTC (rev 5944) @@ -292,7 +292,8 @@ + ", retentionMillis=" + retentionMillis + ", nreaderThreads=" + nreaderThreads + ", nwriters=" + nwriters + ", nreaders=" - + nreaders, ex); + + nreaders + ", indexManager=" + + repo.getDatabase().getIndexManager(), ex); } } if (log.isInfoEnabled()) This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
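The stress harness records only the first failure observed by any worker and rethrows it from the driver together with the trial parameters, so adding the IIndexManager to that message gives enough context to attribute a failure to the EmbeddedFederation mode versus an RWStore-backed Journal. A reduced sketch of that first-cause reporting pattern (field names approximate the test; the rest is illustrative):

{{{
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;

/** Sketch of the "first cause" reporting pattern used by the stress test. */
public class FirstCauseReport {

    private final AtomicBoolean success = new AtomicBoolean(false);
    private final AtomicReference<Throwable> failex = new AtomicReference<Throwable>();

    /** Worker threads record only the first failure they observe. */
    void onWorkerError(final Throwable t) {
        failex.compareAndSet(null, t);
    }

    /** The driver rethrows it with enough context to reproduce the trial. */
    void check(final int retentionMillis, final int nreaderThreads, final Object indexManager) {
        if (!success.get()) {
            final Throwable ex = failex.get();
            if (ex != null)
                throw new AssertionError("Test failed: firstCause=" + ex
                        + ", retentionMillis=" + retentionMillis
                        + ", nreaderThreads=" + nreaderThreads
                        + ", indexManager=" + indexManager);
        }
    }
}
}}}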
From: <tho...@us...> - 2012-02-02 13:04:27
|
Revision: 5945 http://bigdata.svn.sourceforge.net/bigdata/?rev=5945&view=rev Author: thompsonbry Date: 2012-02-02 13:04:16 +0000 (Thu, 02 Feb 2012) Log Message: ----------- Modified RWStore to enable the assert that commitPointsRecycled == commitPointsRemoved. Added assert to RWStore that the lastCommitTime is strictly LT the latestReleasableTime (ie., that we never release the lastCommitTime). Modified TestMROWTransactionsWithHistory to use a random retention time. The retention time range is (1,11,21,...,91). @see https://sourceforge.net/apps/trac/bigdata/ticket/467 (IllegalStateException trying to access lexicon index using RWStore with recycling) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-02-02 12:20:58 UTC (rev 5944) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-02-02 13:04:16 UTC (rev 5945) @@ -2476,6 +2476,14 @@ */ final long latestReleasableTime = transactionService.getReleaseTime(); + if (lastCommitTime <= latestReleasableTime) { + throw new AssertionError("lastCommitTime=" + lastCommitTime + + ", latestReleasableTime=" + latestReleasableTime + + ", lastDeferredReleaseTime=" + + m_lastDeferredReleaseTime + ", activeTxCount=" + + m_activeTxCount); + } + // Note: This is longer true. Delete blocks are attached to the // commit point in which the deletes were made. // /* @@ -3689,9 +3697,9 @@ } /* - * FIXME Restore this code when addressing + * * - * https://sourceforge.net/apps/trac/bigdata/ticket/440 + * @see https://sourceforge.net/apps/trac/bigdata/ticket/440 */ // // Now remove the commit record entries from the commit record index. 
@@ -3705,7 +3713,10 @@ + ", commitPointsRemoved=" + commitPointsRemoved ); -// assert commitPointsRecycled == commitPointsRemoved; + assert commitPointsRecycled == commitPointsRemoved : "commitPointsRecycled=" + + commitPointsRecycled + + " != commitPointsRemoved=" + + commitPointsRemoved; return totalFreed; } Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-02 12:20:58 UTC (rev 5944) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-02 13:04:16 UTC (rev 5945) @@ -25,11 +25,14 @@ for (int i = 0; i < 100; i++) { + final int retentionMillis = (r.nextInt(10) * 10) + 1; + final int nreaderThreads = r.nextInt(19) + 1; - log.warn("Trial: " + i + ", nreaderThreads=" + nreaderThreads); + log.warn("Trial: " + i + ", retentionMillis=" + retentionMillis + + ", nreaderThreads=" + nreaderThreads); - domultiple_csem_transaction2(1/* retentionMillis */, + domultiple_csem_transaction2(retentionMillis, nreaderThreads, 20/* nwriters */, 400/* nreaders */); } Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-02-02 12:20:58 UTC (rev 5944) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-02-02 13:04:16 UTC (rev 5945) @@ -2476,6 +2476,14 @@ */ final long latestReleasableTime = transactionService.getReleaseTime(); + if (lastCommitTime <= latestReleasableTime) { + throw new AssertionError("lastCommitTime=" + lastCommitTime + + ", latestReleasableTime=" + latestReleasableTime + + ", lastDeferredReleaseTime=" + + m_lastDeferredReleaseTime + ", activeTxCount=" + + m_activeTxCount); + } + // Note: This is longer true. Delete blocks are attached to the // commit point in which the deletes were made. // /* @@ -3689,9 +3697,9 @@ } /* - * FIXME Restore this code when addressing + * * - * https://sourceforge.net/apps/trac/bigdata/ticket/440 + * @see https://sourceforge.net/apps/trac/bigdata/ticket/440 */ // // Now remove the commit record entries from the commit record index. 
@@ -3705,8 +3713,11 @@ + ", commitPointsRemoved=" + commitPointsRemoved ); -// assert commitPointsRecycled == commitPointsRemoved; - + assert commitPointsRecycled == commitPointsRemoved : "commitPointsRecycled=" + + commitPointsRecycled + + " != commitPointsRemoved=" + + commitPointsRemoved; + return totalFreed; } Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-02 12:20:58 UTC (rev 5944) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-02 13:04:16 UTC (rev 5945) @@ -25,9 +25,12 @@ for (int i = 0; i < 100; i++) { + final int retentionMillis = (r.nextInt(10) * 10) + 1; + final int nreaderThreads = r.nextInt(19) + 1; - log.warn("Trial: " + i + ", nreaderThreads=" + nreaderThreads); + log.warn("Trial: " + i + ", retentionMillis=" + retentionMillis + + ", nreaderThreads=" + nreaderThreads); domultiple_csem_transaction2(1/* retentionMillis */, nreaderThreads, 20/* nwriters */, 400/* nreaders */); This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
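The new check enforces the invariant that the most recent commit point is never releasable: lastCommitTime must be strictly greater than the transaction service's release time, and the surrounding state is reported when the invariant is violated. The randomized retention drawn by the with-history stress test, (r.nextInt(10) * 10) + 1, takes exactly the values 1, 11, 21, ..., 91 named in the log message. A minimal sketch of the invariant check, with the fields standing in for the corresponding RWStore state:

{{{
/**
 * Sketch of the release-time invariant added above. The fields are
 * placeholders for RWStore state; the point is that the check reports enough
 * context to diagnose a violation when it fires.
 */
public class ReleaseTimeInvariantSketch {

    private long lastDeferredReleaseTime;
    private int activeTxCount;

    /** The last commit point must never be releasable. */
    void assertNotReleasingLastCommit(final long lastCommitTime, final long latestReleasableTime) {
        if (lastCommitTime <= latestReleasableTime) {
            throw new AssertionError("lastCommitTime=" + lastCommitTime
                    + ", latestReleasableTime=" + latestReleasableTime
                    + ", lastDeferredReleaseTime=" + lastDeferredReleaseTime
                    + ", activeTxCount=" + activeTxCount);
        }
    }
}
}}}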
From: <tho...@us...> - 2012-02-02 13:28:47
|
Revision: 5946 http://bigdata.svn.sourceforge.net/bigdata/?rev=5946&view=rev Author: thompsonbry Date: 2012-02-02 13:28:40 +0000 (Thu, 02 Feb 2012) Log Message: ----------- Made assert unconditional (-ea not required). Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-02-02 13:04:16 UTC (rev 5945) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-02-02 13:28:40 UTC (rev 5946) @@ -3713,10 +3713,10 @@ + ", commitPointsRemoved=" + commitPointsRemoved ); - assert commitPointsRecycled == commitPointsRemoved : "commitPointsRecycled=" - + commitPointsRecycled - + " != commitPointsRemoved=" - + commitPointsRemoved; + if (commitPointsRecycled != commitPointsRemoved) + throw new AssertionError("commitPointsRecycled=" + + commitPointsRecycled + " != commitPointsRemoved=" + + commitPointsRemoved); return totalFreed; } Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-02-02 13:04:16 UTC (rev 5945) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2012-02-02 13:28:40 UTC (rev 5946) @@ -3713,10 +3713,10 @@ + ", commitPointsRemoved=" + commitPointsRemoved ); - assert commitPointsRecycled == commitPointsRemoved : "commitPointsRecycled=" - + commitPointsRecycled - + " != commitPointsRemoved=" - + commitPointsRemoved; + if (commitPointsRecycled != commitPointsRemoved) + throw new AssertionError("commitPointsRecycled=" + + commitPointsRecycled + " != commitPointsRemoved=" + + commitPointsRemoved); return totalFreed; } This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
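A Java assert statement is only evaluated when the JVM runs with assertions enabled (-ea), so the previous form of this check was silently skipped in a default deployment; rewriting it as an explicit conditional that throws AssertionError makes it unconditional. The snippet below contrasts the two forms with illustrative values:

{{{
/**
 * Contrast between a Java "assert" (skipped without -ea) and an explicit
 * check that throws AssertionError. Values below are illustrative only.
 */
public class AssertVsCheck {

    public static void main(final String[] args) {
        final int commitPointsRecycled = 3;
        final int commitPointsRemoved = 4;

        // Skipped unless the JVM was started with -ea / -enableassertions:
        assert commitPointsRecycled == commitPointsRemoved
                : "recycled=" + commitPointsRecycled;

        // Always enforced, regardless of JVM flags:
        if (commitPointsRecycled != commitPointsRemoved)
            throw new AssertionError("commitPointsRecycled=" + commitPointsRecycled
                    + " != commitPointsRemoved=" + commitPointsRemoved);
    }
}
}}}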
From: <tho...@us...> - 2012-02-02 13:35:58
|
Revision: 5947 http://bigdata.svn.sourceforge.net/bigdata/?rev=5947&view=rev Author: thompsonbry Date: 2012-02-02 13:35:47 +0000 (Thu, 02 Feb 2012) Log Message: ----------- Raised log level to WARN when there is no commit record index for the given timestamp. This is on the code path that is being taken for [1]. [1] https://sourceforge.net/apps/trac/bigdata/ticket/467 (IllegalStateException trying to access lexicon index using RWStore with recycling) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-02-02 13:28:40 UTC (rev 5946) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-02-02 13:35:47 UTC (rev 5947) @@ -3417,8 +3417,8 @@ if (commitRecord == null) { - if (log.isInfoEnabled()) - log.info("No commit record for timestamp=" + commitTime); +// if (log.isInfoEnabled()) + log.warn("No commit record for timestamp=" + commitTime); return null; Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-02-02 13:28:40 UTC (rev 5946) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2012-02-02 13:35:47 UTC (rev 5947) @@ -3417,8 +3417,8 @@ if (commitRecord == null) { - if (log.isInfoEnabled()) - log.info("No commit record for timestamp=" + commitTime); +// if (log.isInfoEnabled()) + log.warn("No commit record for timestamp=" + commitTime); return null; This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2012-02-02 20:16:14
|
Revision: 5948 http://bigdata.svn.sourceforge.net/bigdata/?rev=5948&view=rev Author: thompsonbry Date: 2012-02-02 20:16:05 +0000 (Thu, 02 Feb 2012) Log Message: ----------- Cloned CommitTimeIndex to create TxId2CommitTimeIndex and then modified the latter to provide a map from the txId to the commit time on which the transaction is reading. (CommitTimeIndex is also used by the DistributedTransactionService for a different purpose, which is why I cloned the class.) Modified getCounters() for AbstractTransactionService and DistributedTransactionService to (a) declare the ICounterSetAccess interface; (b) NOT use the synchronized keyword and instead return a new CounterSet instance for each request; and (c) to not use methods which require synchronized access to either the startTimeIndex (AbstractTransactionService) or the commitTime index (DistributedTransactionService). Modified AbstractTransactionService#getStartTime() to return 0L rather than -1 for readCommitTime when there are no commit points on the Journal. Updated TestTransactionService to correct two tests which were predicting the behavior demonsrated by the bug. I extended the test_newTx_readOnly_historyGone() significantly and also create a second variant of that test which covers the case when there are no more active transactions after a commit/abort. The transaction service and RWJournal test suites are green. @see https://sourceforge.net/apps/trac/bigdata/ticket/467 Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/CommitRecordIndex.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Journal.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/DistributedTransactionService.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/TestTransactionService.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/CommitRecordIndex.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/journal/Journal.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/service/DistributedTransactionService.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/test/com/bigdata/journal/TestTransactionService.java branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/TxId2CommitTimeIndex.java branches/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/service/TxId2CommitTimeIndex.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/CommitRecordIndex.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/CommitRecordIndex.java 2012-02-02 13:35:47 UTC (rev 5947) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/CommitRecordIndex.java 2012-02-02 20:16:05 UTC (rev 5948) @@ -503,6 +503,11 @@ this.addr = addr; } + + public String 
toString() { + return super.toString() + "{commitTime=" + commitTime + ",addr=" + + addr + "}"; + } /** * Used to (de-)serialize {@link Entry}s (NOT thread-safe). Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Journal.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Journal.java 2012-02-02 13:35:47 UTC (rev 5947) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Journal.java 2012-02-02 20:16:05 UTC (rev 5948) @@ -355,7 +355,7 @@ if (bufferStrategy instanceof RWStrategy) { if (txLog.isInfoEnabled()) txLog.info("OPEN : txId=" + state.tx - + ", readsOnCommitTime=" + state.readCommitTime); + + ", readsOnCommitTime=" + state.readsOnCommitTime); final RawTx tx = ((RWStrategy)bufferStrategy).getRWStore().newTx(); if (m_rawTxs.put(state.tx, tx) != null) { throw new IllegalStateException( @@ -368,7 +368,7 @@ protected void deactivateTx(final TxState state) { if (txLog.isInfoEnabled()) txLog.info("CLOSE: txId=" + state.tx - + ", readsOnCommitTime=" + state.readCommitTime); + + ", readsOnCommitTime=" + state.readsOnCommitTime); /* * Note: We need to deactivate the tx before RawTx.close() is * invoked otherwise the activeTxCount will never be zero inside Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java 2012-02-02 13:35:47 UTC (rev 5947) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java 2012-02-02 20:16:05 UTC (rev 5948) @@ -45,6 +45,7 @@ import com.bigdata.config.LongValidator; import com.bigdata.counters.CounterSet; +import com.bigdata.counters.ICounterSetAccess; import com.bigdata.counters.Instrument; import com.bigdata.journal.ITransactionService; import com.bigdata.journal.ITx; @@ -72,7 +73,7 @@ * timestamp generated by the last master service. */ abstract public class AbstractTransactionService extends AbstractService - implements ITransactionService, IServiceShutdown { + implements ITransactionService, IServiceShutdown, ICounterSetAccess { /** * Logger. @@ -633,7 +634,7 @@ * {@link #nextTimestamp()}. * <p> * Note: The transaction service will refuse to start new transactions whose - * timestamps are LTE to {@link #getEarliestTxStartTime()}. + * timestamps are LTE to {@link #earliestOpenTxId}. * * @throws RuntimeException * Wrapping {@link TimeoutException} if a timeout occurs @@ -731,7 +732,7 @@ /** #of transactions aborted. */ private long abortCount = 0L; - /** #of transactions committed. */ + /** #of transactions committed (does not count bare commits). */ private long commitCount = 0L; /** #of active read-write transactions. */ @@ -773,15 +774,30 @@ } +// /** +// * The minimum over the absolute values of the active transactions. +// * <p> +// * Note: This is a transaction identifier. It is NOT the commitTime on which +// * that transaction is reading. +// * +// * @see https://sourceforge.net/apps/trac/bigdata/ticket/467 +// */ +// public long getEarliestTxStartTime() { +// +// return earliestTxStartTime; +// +// } + /** - * Return the minimum over the absolute values of the active transactions. + * The minimum over the absolute values of the active transactions and ZERO + * (0) if there are no open transactions. + * <p> + * Note: This is a transaction identifier. 
It is NOT the commitTime on which + * that transaction is reading. + * + * @see https://sourceforge.net/apps/trac/bigdata/ticket/467 */ - public long getEarliestTxStartTime() { - - return earliestTxStartTime; - - } - private volatile long earliestTxStartTime = 0L; + private volatile long earliestOpenTxId = 0L; /** * @see Options#MIN_RELEASE_AGE @@ -861,7 +877,7 @@ * otherwise this will throw out an exception. */ - startTimeIndex.add(Math.abs(state.tx)); + startTimeIndex.add(Math.abs(state.tx), state.readsOnCommitTime); } @@ -877,6 +893,13 @@ } + if (log.isInfoEnabled()) + log.info(state.toString() + ", startCount=" + startCount + + ", abortCount=" + abortCount + ", commitCount=" + + commitCount + ", readOnlyActiveCount=" + + readOnlyActiveCount + ", readWriteActiveCount=" + + readWriteActiveCount); + } finally { state.lock.unlock(); @@ -886,6 +909,28 @@ } /** + * Return the commit time on which the transaction is reading. + * <p> + * Note: This method is exposed primarily for the unit tests. + * + * @param txId + * The transaction identifier. + * @return The commit time on which that transaction is reading. + * @throws IllegalArgumentException + * if there is no such transaction. + */ + protected long getReadsOnTime(final long txId) { + + final TxState state = activeTx.get(txId); + + if(state == null) + throw new IllegalArgumentException(); + + return state.readsOnCommitTime; + + } + + /** * Removes the transaction from the local tables. * <p> * Note: The caller MUST own {@link TxState#lock} across this method and @@ -947,8 +992,12 @@ } - if (log.isInfoEnabled()) - log.info(state.toString()); + if (log.isInfoEnabled()) + log.info(state.toString() + ", startCount=" + startCount + + ", abortCount=" + abortCount + ", commitCount=" + + commitCount + ", readOnlyActiveCount=" + + readOnlyActiveCount + ", readWriteActiveCount=" + + readWriteActiveCount); // } finally { // @@ -964,11 +1013,12 @@ * deactivated. The method will remove the transaction entry in the ordered * set of running transactions ({@link #startTimeIndex}). If the specified * timestamp corresponds to the earliest running transaction, then the - * <code>releaseTime</code> will be updated and the new releaseTime will - * be set using {@link #setReleaseTime(long)}. + * <code>releaseTime</code> will be updated and the new releaseTime will be + * set using {@link #setReleaseTime(long)}. * <p> - * Note that the {@link #startTimeIndex} contains the absolute value of the - * transaction identifiers! + * Note that the {@link #startTimeIndex} keys are the absolute value of the + * transaction identifiers! The values are the commit times on which the + * corresponding transaction is reading. * * @param timestamp * The absolute value of a transaction identifier that has just @@ -977,6 +1027,8 @@ * @todo the {@link #startTimeIndex} could be used by * {@link #findUnusedTimestamp(long, long)} so that it could further * constrain its search within the half-open interval. + * + @see https://sourceforge.net/apps/trac/bigdata/ticket/467 */ final protected void updateReleaseTime(final long timestamp) { @@ -1014,6 +1066,11 @@ * [now] if there are no more running transactions. */ final long earliestTxStartTime; + /* + * The commit time on which the earliest remaining tx is reading and + * [now] if there are no more running transactions. + */ + final long earliestTxReadsOnCommitTime; synchronized (startTimeIndex) { @@ -1039,21 +1096,37 @@ /* * The start time associated with the earliest remaining tx. 
*/ - earliestTxStartTime = startTimeIndex.decodeKey(startTimeIndex - .keyAt(0L)); + final byte[] key = startTimeIndex.keyAt(0L); + earliestTxStartTime = startTimeIndex.decodeKey(key); + + /* + * The commit point on which that tx is reading. + * + * @see https://sourceforge.net/apps/trac/bigdata/ticket/467 + */ + + final byte[] val = startTimeIndex.valueAt(0L); + + earliestTxReadsOnCommitTime = startTimeIndex.decodeVal(val); + + // The earliest open transaction identifier. + this.earliestOpenTxId = earliestTxStartTime; + } else { /* * There are no commit points and there are no active * transactions. */ - earliestTxStartTime = now; + earliestTxStartTime = earliestTxReadsOnCommitTime = now; + + // There are no open transactions. + this.earliestOpenTxId = 0L; + } - this.earliestTxStartTime = earliestTxStartTime; - } // synchronized(startTimeIndex) if (minReleaseAge == Long.MAX_VALUE) { @@ -1085,9 +1158,14 @@ * minReleaseAge is very large, e.g., Long#MAX_VALUE. This is caught * above for that specific value, but other very large values could * also cause problems. + * + * @see https://sourceforge.net/apps/trac/bigdata/ticket/467 */ - final long releaseTime = Math.min(lastCommitTime - 1, Math.min( - earliestTxStartTime - 1, now - minReleaseAge)); + final long releaseTime = Math.min( + lastCommitTime - 1, + Math.min(earliestTxReadsOnCommitTime - 1, now + - minReleaseAge)); +// earliestTxStartTime - 1, now - minReleaseAge)); /* * We only update the release time if the computed time would @@ -1212,8 +1290,9 @@ } /** - * A transient index holding the <strong>absolute value</strong> of the - * start times of all active transactions. + * A transient index whose keys are the <strong>absolute value</strong> of + * the start times of all active transactions. The values are the commit + * times on which the corresponding transaction is reading. * <p> * Note: The absolute value constraint is imposed so that we can directly * identify the earliest active transaction in the index by its position (it @@ -1224,7 +1303,7 @@ * will not return a timestamp whose absolute value corresponds to an active * transaction. */ - protected final CommitTimeIndex startTimeIndex = CommitTimeIndex + protected final TxId2CommitTimeIndex startTimeIndex = TxId2CommitTimeIndex .createTransient(); /** @@ -1371,8 +1450,8 @@ */ final long commitTime = findCommitTime(timestamp); - // The transaction will read from this commit point (-1 iff no commits yet). - readCommitTime.set(commitTime); + // The transaction will read from this commit point. + readCommitTime.set(commitTime == -1 ? 0 : commitTime); if (commitTime == -1L) { @@ -1894,11 +1973,11 @@ /** * The commit time associated with the commit point against which this - * transaction will read. This will be <code>-1</code> IFF there are no + * transaction will read. This will be <code>0</code> IFF there are no * commit points yet. Otherwise it is a real commit time associated with * some existing commit point. */ - public final long readCommitTime; + public final long readsOnCommitTime; /** * <code>true</code> iff the transaction is read-only. @@ -2102,7 +2181,8 @@ * The assigned transaction identifier. * @param readCommitTime * The commit time associated with the commit point against - * which this transaction will read. + * which this transaction will read (may be ZERO if there are + * no commit points, must not be negative). 
*/ protected TxState(final long tx, final long readCommitTime) { @@ -2111,10 +2191,13 @@ if (tx == ITx.READ_COMMITTED) throw new IllegalArgumentException(); - + + if (readCommitTime < 0) + throw new IllegalArgumentException(); + this.tx = tx; - this.readCommitTime = readCommitTime; + this.readsOnCommitTime = readCommitTime; this.readOnly = TimestampUtility.isReadOnly(tx); @@ -2263,7 +2346,8 @@ // return Long.toString(tx); - return "GlobalTxState{tx=" + tx + ",readOnly=" + readOnly + return "GlobalTxState{tx=" + tx + ",readsOnCommitTime=" + + readsOnCommitTime + ",readOnly=" + readOnly + ",runState=" + runState + "}"; } @@ -2387,14 +2471,13 @@ /** * Return the {@link CounterSet}. - * - * @todo define interface declaring the counters reported here. */ - synchronized public CounterSet getCounters() { +// synchronized + public CounterSet getCounters() { - if (countersRoot == null) { +// if (countersRoot == null) { - countersRoot = new CounterSet(); + final CounterSet countersRoot = new CounterSet(); countersRoot.addCounter("runState", new Instrument<String>() { protected void sample() { @@ -2459,26 +2542,30 @@ /* * Reports the earliest transaction identifier -or- ZERO (0L) if * there are no active transactions. + * + * Note: This is a txId. It is NOT the commitTime on which that tx + * is reading. */ countersRoot.addCounter("earliestTx", new Instrument<Long>() { protected void sample() { - synchronized (startTimeIndex) { - if (startTimeIndex.getEntryCount() == 0) { - // i.e., nothing running. - setValue(0L); - } - final long tx = startTimeIndex.find(1L); - setValue(tx); - } + setValue(earliestOpenTxId); +// synchronized (startTimeIndex) { +// if (startTimeIndex.getEntryCount() == 0) { +// // i.e., nothing running. +// setValue(0L); +// } +// final long tx = startTimeIndex.find(1L); +// setValue(tx); +// } } }); - } +// } return countersRoot; } - protected CounterSet countersRoot; +// protected CounterSet countersRoot; } Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/DistributedTransactionService.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/DistributedTransactionService.java 2012-02-02 13:35:47 UTC (rev 5947) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/DistributedTransactionService.java 2012-02-02 20:16:05 UTC (rev 5948) @@ -82,7 +82,6 @@ * Options understood by this service. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ */ public interface Options extends AbstractTransactionService.Options { @@ -370,7 +369,6 @@ * the approach is less rigorous. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ */ private class SnapshotTask implements Runnable { @@ -455,7 +453,6 @@ * a {@link ByteBuffer} or using {@link Adler32}. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ */ public static class SnapshotHelper { @@ -813,7 +810,6 @@ * Task runs {@link ITxCommitProtocol#abort(long)}. 
* * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ */ private static class AbortTask implements Callable<Void> { @@ -1120,7 +1116,6 @@ * </p> * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ */ private class DistributedTxCommitTask implements Callable<Long> { @@ -1436,7 +1431,6 @@ * * @author <a href="mailto:tho...@us...">Bryan * Thompson</a> - * @version $Id$ */ private class TaskRunner implements Callable<Void> { @@ -1644,7 +1638,6 @@ * * @author <a href="mailto:tho...@us...">Bryan * Thompson</a> - * @version $Id$ */ protected class PrepareTask implements Callable<Void> { @@ -1928,7 +1921,6 @@ * {@link IDataService}. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ */ private static class SetReleaseTimeTask implements Callable<Void> { @@ -1980,7 +1972,6 @@ * resources it can release. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * @version $Id$ */ protected class NotifyReleaseTimeTask implements Runnable { @@ -2066,14 +2057,15 @@ /** * Adds counters for the {@link LockManager}. */ - synchronized public CounterSet getCounters() { +// synchronized + public CounterSet getCounters() { - if (countersRoot == null) { +// if (countersRoot == null) { /* * Setup basic counters. */ - super.getCounters(); + final CounterSet countersRoot = super.getCounters(); /** * The lock manager imposing a partial ordering on the prepare phase @@ -2121,9 +2113,15 @@ countersRoot.addCounter("commitTimesCount", new Instrument<Long>() { protected void sample() { - synchronized (commitTimeIndex) { - setValue(commitTimeIndex.getEntryCount()); - } + /* + * Note: This uses a method which does not require + * synchronization. (The entryCount is reported + * without traversing the BTree.) + */ + setValue(commitTimeIndex.getEntryCount()); +// synchronized (commitTimeIndex) { +// setValue(commitTimeIndex.getEntryCount()); +// } } }); @@ -2133,7 +2131,7 @@ } }); - } +// } return countersRoot; Added: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/TxId2CommitTimeIndex.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/TxId2CommitTimeIndex.java (rev 0) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/TxId2CommitTimeIndex.java 2012-02-02 20:16:05 UTC (rev 5948) @@ -0,0 +1,357 @@ +package com.bigdata.service; + +import java.io.IOException; +import java.io.ObjectInput; +import java.io.ObjectOutput; +import java.util.UUID; + +import com.bigdata.btree.BTree; +import com.bigdata.btree.Checkpoint; +import com.bigdata.btree.DefaultTupleSerializer; +import com.bigdata.btree.ITuple; +import com.bigdata.btree.IndexMetadata; +import com.bigdata.btree.keys.ASCIIKeyBuilderFactory; +import com.bigdata.btree.keys.IKeyBuilder; +import com.bigdata.btree.keys.IKeyBuilderFactory; +import com.bigdata.btree.keys.KeyBuilder; +import com.bigdata.rawstore.Bytes; +import com.bigdata.rawstore.IRawStore; + +/** + * {@link BTree} whose keys are commit times. No values are stored in the + * {@link BTree}. + * + * @todo Subclass {@link BTree} for long keys and arbitrary values and move the + * find() and findNext() methods onto that class and make the value type + * generic. That same logic is replicated right now in several places and + * there is no reason for that. 
Allow 0L for {@link #find(long)}, but + * check all callers first to see who might use that for error checking + * and then modify callers using 1L to use 0L. In fact, + * {@link #find(long)} should probably accept the value to be returned in + * case there is no LTE entry (that is, in case the index is empty). + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + * @version $Id$ + */ +public class TxId2CommitTimeIndex extends BTree { + + /** + * Instance used to encode the timestamp into the key. + */ + final private IKeyBuilder keyBuilder = new KeyBuilder(Bytes.SIZEOF_LONG); + + /** + * Create a transient instance. + * + * @return The new instance. + */ + static public TxId2CommitTimeIndex createTransient() { + + final IndexMetadata metadata = new IndexMetadata(UUID.randomUUID()); + + metadata.setBTreeClassName(TxId2CommitTimeIndex.class.getName()); + + metadata.setTupleSerializer(new TupleSerializer( + new ASCIIKeyBuilderFactory(Bytes.SIZEOF_LONG))); + + return (TxId2CommitTimeIndex) BTree.createTransient(/* store, */metadata); + + } + + /** + * Load from the store. + * + * @param store + * The backing store. + * @param checkpoint + * The {@link Checkpoint} record. + * @param metadata + * The metadata record for the index. + */ + public TxId2CommitTimeIndex(final IRawStore store, final Checkpoint checkpoint, + final IndexMetadata metadata, boolean readOnly) { + + super(store, checkpoint, metadata, readOnly); + + } + + /** + * Encodes the commit time into a key. + * + * @param commitTime + * The commit time. + * + * @return The corresponding key. + */ + protected byte[] encodeKey(final long commitTime) { + + return keyBuilder.reset().append(commitTime).getKey(); + + } + + protected long decodeKey(final byte[] key) { + + return KeyBuilder.decodeLong(key, 0); + + } + + protected long decodeVal(final byte[] val) { + + return KeyBuilder.decodeLong(val, 0); + + } + + /** + * Return the largest key that is less than or equal to the given timestamp. + * This is used primarily to locate the commit point that will serve as the + * ground state for a transaction having <i>timestamp</i> as its start time. + * In this context the LTE search identifies the most recent commit point + * that not later than the start time of the transaction. + * + * @param timestamp + * The given timestamp. + * + * @return The timestamp -or- <code>-1L</code> iff there is no entry in the + * index which satisifies the probe. + * + * @throws IllegalArgumentException + * if <i>timestamp</i> is less than or equals to ZERO (0L). + */ + synchronized public long find(final long timestamp) { + + if (timestamp <= 0L) + throw new IllegalArgumentException(); + + // find (first less than or equal to). + final long index = findIndexOf(timestamp); + + if(index == -1) { + + // No match. + + return -1L; + + } + + return decodeKey(keyAt(index)); + + } + + /** + * Find the first commit time strictly greater than the timestamp. + * + * @param timestamp + * The timestamp. A value of ZERO (0) may be used to find the + * first commit time. + * + * @return The commit time -or- <code>-1L</code> if there is no commit + * record whose timestamp is strictly greater than <i>timestamp</i>. + */ + synchronized public long findNext(final long timestamp) { + + /* + * Note: can also be written using rangeIterator().next(). + */ + + if (timestamp < 0L) + throw new IllegalArgumentException(); + + // find first strictly greater than. + final long index = findIndexOf(Math.abs(timestamp)) + 1; + + if (index == nentries) { + + // No match. 
+ + return -1L; + + } + + return decodeKey(keyAt(index)); + + } + + /** + * Find the index having the largest timestamp that is less than or + * equal to the given timestamp. + * + * @return The index having the largest timestamp that is less than or + * equal to the given timestamp -or- <code>-1</code> iff there + * are no index entries. + */ + synchronized public long findIndexOf(final long timestamp) { + + long pos = super.indexOf(encodeKey(timestamp)); + + if (pos < 0) { + + /* + * the key lies between the entries in the index, or possible before + * the first entry in the index. [pos] represents the insert + * position. we convert it to an entry index and subtract one to get + * the index of the first commit record less than the given + * timestamp. + */ + + pos = -(pos+1); + + if (pos == 0) { + + // No entry is less than or equal to this timestamp. + return -1; + + } + + pos--; + + return pos; + + } else { + + /* + * exact hit on an entry. + */ + + return pos; + + } + + } + + /** + * Add an entry for the commitTime. + * + * @param txId + * A timestamp representing a transaction identifier (normally + * the absolute value of the transaction identifier per + * {@link AbstracTransactionService}). + * @param txId + * A timestamp representing a commit time. + * + * @exception IllegalArgumentException + * if <i>commitTime</i> is <code>0L</code>. + */ +// * @exception IllegalArgumentException +// * if there is already an entry registered under for the +// * given timestamp. + public void add(final long txId, final long commitTime) { + + if (txId == 0L) + throw new IllegalArgumentException(); + + if (commitTime < 0L) + throw new IllegalArgumentException(); + + final byte[] key = encodeKey(txId); + + if(super.contains(key)) { + + throw new IllegalArgumentException("entry exists: timestamp=" + + commitTime); + + } + + final byte[] val = encodeKey(commitTime); + + super.insert(key, val); + + } + + /** + * Encapsulates key and value formation. + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + */ + static protected class TupleSerializer extends + DefaultTupleSerializer<Long, Long> { + + /** + * + */ + private static final long serialVersionUID = -2851852959439807542L; + + /** + * De-serialization ctor. + */ + public TupleSerializer() { + + super(); + + } + + /** + * Ctor when creating a new instance. + * + * @param keyBuilderFactory + */ + public TupleSerializer(final IKeyBuilderFactory keyBuilderFactory) { + + super(keyBuilderFactory); + + } + + /** + * Decodes the key as a transaction identifier. + */ + @Override + public Long deserializeKey(final ITuple tuple) { + + final byte[] key = tuple.getKeyBuffer().array(); + + final long id = KeyBuilder.decodeLong(key, 0); + + return id; + + } + + /** + * Decodes the value as a commit time. + */ + public Long deserialize(final ITuple tuple) { + + final byte[] val = tuple.getValueBuffer().array(); + + final long id = KeyBuilder.decodeLong(val, 0); + + return id; + + } + + /** + * The initial version (no additional persistent state). + */ + private final static transient byte VERSION0 = 0; + + /** + * The current version. 
+ */ + private final static transient byte VERSION = VERSION0; + + public void readExternal(final ObjectInput in) throws IOException, + ClassNotFoundException { + + super.readExternal(in); + + final byte version = in.readByte(); + + switch (version) { + case VERSION0: + break; + default: + throw new UnsupportedOperationException("Unknown version: " + + version); + } + + } + + public void writeExternal(final ObjectOutput out) throws IOException { + + super.writeExternal(out); + + out.writeByte(VERSION); + + } + + } + +} Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/TxId2CommitTimeIndex.java ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/TestTransactionService.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/TestTransactionService.java 2012-02-02 13:35:47 UTC (rev 5947) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/TestTransactionService.java 2012-02-02 20:16:05 UTC (rev 5948) @@ -96,9 +96,16 @@ return this; } + + @Override + protected long getReadsOnTime(final long txId) { + return super.getReadsOnTime(txId); + + } + @Override - public AbstractFederation getFederation() { + public AbstractFederation<?> getFederation() { return null; } @@ -993,16 +1000,14 @@ } /** - * Verify that a request for an historical state that is no longer available - * will be rejected. + * Unit test verifies that the release time does NOT advance when the + * earliest running transaction terminates but a second transaction is still + * active which reads on the same commit time. * - * @todo verify illegal to advance the release time while there are active - * tx LTE that the specified release time (or if legal, then force the - * abort of those tx and verify that). - * - * @throws IOException + * @see https://sourceforge.net/apps/trac/bigdata/ticket/467 */ - public void test_newTx_readOnly_historyGone() throws IOException { + public void test_newTx_readOnly_releaseTimeRespectsReadsOnCommitTime() + throws IOException { final MockTransactionService service = newFixture(); @@ -1021,43 +1026,388 @@ // this will be the earliest running tx until it completes. final long tx0 = service.newTx(ITx.UNISOLATED); - // timestamp GT [tx0] and LT [tx1]. + // timestamp GT [abs(tx0)] and LT [abs(tx1)]. final long ts = service.nextTimestamp(); + assertTrue(ts > Math.abs(tx0)); + // this will become the earliest running tx if tx0 completes first. final long tx1 = service.newTx(ITx.UNISOLATED); - - // commit the tx0 - this will update the release time. + + assertTrue(ts < Math.abs(tx1)); + + // commit tx0 // final long commitTime0 = service.commit(tx0); final long newReleaseTime = service.getReleaseTime(); + /* + * Verify release time was NOT updated since both transactions are + * reading from the same commit time. + */ + assertEquals(oldReleaseTime, newReleaseTime); + + /* + * Try to read from [ts]. This should succeed since the release time + * was not advanced. + */ + service.newTx(ts); + + } finally { + + service.destroy(); + + } + + } + + /** + * Verify that a request for an historical state that is no longer available + * will be rejected. 
+ * <p> + * The test is setup as follows: + * + * <pre> + * +------tx2--------------- + * +-----------------tx1---------------+ + * +-----------tx0---------+ + * +=====================================================+ + * +======================== + * 0-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+ + * tx0 | tx1 | | | tx2 | | + * (0) ts1 (0) ts2 | ts3 (ct0) ts4 | + * ct0 ct1 + * rt=0 rt=ct0-1 + * </pre> + * + * where tx0, ... are transactions.<br/> + * where ts0, ... are timestamps.<br/> + * where ct0, ... are the commit times for the corresponding tx#.<br/> + * where (...) indicates the commit time on which the transaction is + * reading.<br/> + * where rt# is a releaseTime. The releaseTime and lastCommitTime are + * initially ZERO (0).<br/> + * The long "====" lines above the timeline represent the period during + * which a given commit point is available to a new transaction.<br/> + * The long "---" lines above the timeline represent the life cycle of a + * specific transaction.<br/> + * <p> + * Any transaction which starts before ct1 will see the history going back + * to commitTime=0. This commit time is initially available because there is + * no committed data. It is pinned by tx0 and then by tx1. Once both of + * those transactions complete, that commit time is released (minReleaseAge + * is zero). + * <p> + * When tx1 completes, the release time is advanced (rt=ct0-1). + * <p> + * Any transaction which starts after tx2 will see history back to ct0. + */ + public void test_newTx_readOnly_historyGone() throws IOException { + + final MockTransactionService service = newFixture(); + + try { + + /* + * Verify that the service is not retaining history beyond the last + * commit point. + */ + assertEquals(0L, service.getMinReleaseAge()); + + final long oldReleaseTime = service.getReleaseTime(); + + assertEquals(0L, oldReleaseTime); + + // this will be the earliest running tx until it completes. + final long tx0 = service.newTx(ITx.UNISOLATED); + + assertEquals(0L, service.getReadsOnTime(tx0)); + + // timestamp GT [abs(tx0)] and LT [abs(tx1)]. + final long ts1 = service.nextTimestamp(); + + assertTrue(ts1 > Math.abs(tx0)); + + // this will become the earliest running tx if tx0 completes first. + final long tx1 = service.newTx(ITx.UNISOLATED); + + assertEquals(0L, service.getReadsOnTime(tx1)); + + // timestamp GT [abs(tx1)] and LT [abs(tx2)]. + final long ts2 = service.nextTimestamp(); + + assertTrue(ts1 < Math.abs(tx1)); + + assertTrue(ts2 > Math.abs(tx1)); + + // commit tx0. + final long commitTimeTx0 = service.commit(tx0); + + assertTrue(commitTimeTx0 > ts2); + + final long newReleaseTime = service.getReleaseTime(); + + // verify release time was NOT updated. + assertEquals(oldReleaseTime, newReleaseTime); + + // After tx0 commits. + final long ts3 = service.nextTimestamp(); + + assertTrue(ts3 > commitTimeTx0); + + /* + * Start another transaction. This should read from the commitTime + * for tx0. + * + * TODO Do an alternative test where we do not obtain tx2. How does + * that play out. + */ + final long tx2 = service.newTx(ITx.UNISOLATED); + + // After tx2 starts + final long ts4 = service.nextTimestamp(); + + assertTrue(ts2 < Math.abs(tx2)); + assertTrue(ts4 > Math.abs(tx2)); + + /* + * Commit tx1. The releaseTime SHOULD be updated since tx1 was the + * earliest running transaction and no remaining transaction reads + * from the same commit time as tx1. 
+ */ + final long commitTimeTx1 = service.commit(tx1); + + assertTrue(commitTimeTx1 > commitTimeTx0); + assertTrue(commitTimeTx1 > ts3); + assertTrue(commitTimeTx1 > tx2); + + final long newReleaseTime2 = service.getReleaseTime(); + // verify release time was updated. - if (oldReleaseTime == newReleaseTime) { + assertNotSame(oldReleaseTime, newReleaseTime2); + + /* + * Should have advanced the release time right up to (but LT) the + * commit time on which tx2 is reading, which is the commitTime for + * tx0. + * + * Note: This assumes [minReleaseAge==0]. + */ + assertEquals(Math.abs(commitTimeTx0) - 1, newReleaseTime2); - fail("releaseTime not updated: releaseTime=" + newReleaseTime); - + try { + /* + * Try to read from [ts1]. This timestamp was obtain after tx0 + * and before tx1. Since [minReleaseAge==0], the history for + * this timestamp was released after both tx0 and tx1 were done. + * Therefore, we should not be able to obtain a transaction for + * this timestamp. + */ + service.newTx(ts1); + fail("Expecting: " + IllegalStateException.class); + } catch (IllegalStateException ex) { + log.info("Ignoring expected exception: " + ex); } + + try { + /* + * Try to read from [ts2]. This timestamp was obtain after tx1 + * and before tx1 was committed. Since [minReleaseAge==0], the + * history for this timestamp was released after both tx0 and + * tx1 were done. Therefore, we should not be able to obtain a + * transaction for this timestamp. + */ + service.newTx(ts2); + fail("Expecting: " + IllegalStateException.class); + } catch (IllegalStateException ex) { + log.info("Ignoring expected exception: " + ex); + } /* + * This will read on the commit point pinned by tx1, which is + * [commitTimeTx0]. + */ + final long tx3 = service.newTx(ts3); + + assertEquals(commitTimeTx0, service.getReadsOnTime(tx3)); + + } finally { + + service.destroy(); + + } + + } + + /** + * This is a variant on {@link #test_newTx_readOnly_historyGone()} where we + * do not start tx2. In this case, when we end tx1 the release time will + * advance right up to the most recent commit time. + * <p> + * The test is setup as follows: + * + * <pre> + * +-----------------tx1---------------+ + * +-----------tx0---------+ + * +=====================================================+ + * 0-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+ + * tx0 | tx1 | | | | | + * (0) ts1 (0) ts2 | ts3 | + * ct0 ct1 + * rt=0 rt=now-1 + * </pre> + * + * where tx0, ... are transactions.<br/> + * where ts0, ... are timestamps.<br/> + * where ct0, ... are the commit times for the corresponding tx#.<br/> + * where (...) indicates the commit time on which the transaction is + * reading.<br/> + * where rt# is a releaseTime. The releaseTime and lastCommitTime are + * initially ZERO (0).<br/> + * The long "====" lines above the timeline represent the period during + * which a given commit point is available to a new transaction.<br/> + * The long "---" lines above the timeline represent the life cycle of a + * specific transaction.<br/> + * <p> + * Any transaction which starts before ct1 will see the history going back + * to commitTime=0. This commit time is initially available because there is + * no committed data. It is pinned by tx0 and then by tx1. Once both of + * those transactions complete, that commit time is released (minReleaseAge + * is zero). + * <p> + * When tx1 completes, the release time is advanced (rt=now-1). + * <p> + * Any transaction which starts after ct1 will see read on ct1. 
+ */ + public void test_newTx_readOnly_historyGone2() throws IOException { + + final MockTransactionService service = newFixture(); + + try { + + /* + * Verify that the service is not retaining history beyond the last + * commit point. + */ + assertEquals(0L, service.getMinReleaseAge()); + + final long oldReleaseTime = service.getReleaseTime(); + + assertEquals(0L, oldReleaseTime); + + // this will be the earliest running tx until it completes. + final long tx0 = service.newTx(ITx.UNISOLATED); + + assertEquals(0L, service.getReadsOnTime(tx0)); + + // timestamp GT [abs(tx0)] and LT [abs(tx1)]. + final long ts1 = service.nextTimestamp(); + + assertTrue(ts1 > Math.abs(tx0)); + + // this will become the earliest running tx if tx0 completes first. + final long tx1 = service.newTx(ITx.UNISOLATED); + + assertEquals(0L, service.getReadsOnTime(tx1)); + + // timestamp GT [abs(tx1)] and LT [abs(tx2)]. + final long ts2 = service.nextTimestamp(); + + assertTrue(ts1 < Math.abs(tx1)); + + assertTrue(ts2 > Math.abs(tx1)); + + // commit tx0. + final long commitTimeTx0 = service.commit(tx0); + + assertTrue(commitTimeTx0 > ts2); + + final long newReleaseTime = service.getReleaseTime(); + + // verify release time was NOT updated. + assertEquals(oldReleaseTime, newReleaseTime); + + // After tx0 commits. + final long ts3 = service.nextTimestamp(); + + assertTrue(ts3 > commitTimeTx0); + + /* + * Commit tx1. The releaseTime SHOULD be updated since tx1 was the + * earliest running transaction and no remaining transaction reads + * from the same commit time as tx1. + */ + final long commitTimeTx1 = service.commit(tx1); + + assertTrue(commitTimeTx1 > commitTimeTx0); + assertTrue(commitTimeTx1 > ts3); + + final long ts4 = service.nextTimestamp(); + + assertTrue(ts4 > commitTimeTx1); + + final long newReleaseTime2 = service.getReleaseTime(); + + // verify release time was updated. + assertNotSame(oldReleaseTime, newReleaseTime2); + + /* * Should have advanced the release time right up to (but LT) the - * transaction which is now the earliest running tx (that will be - * tx1). (This assumes [minReleaseAge==0].) + * commitTime for tx1. */ - assertEquals(Math.abs(tx1) - 1, newReleaseTime); + assertEquals(Math.abs(commitTimeTx1) - 1, newReleaseTime2); + + try { + /* + * Try to read from [ts1]. This timestamp was obtain after tx0 + * and before tx1. Since [minReleaseAge==0], the history for + * this timestamp was released after both tx0 and tx1 were done. + * Therefore, we should not be able to obtain a transaction for + * this timestamp. + */ + service.newTx(ts1); + fail("Expecting: " + IllegalStateException.class); + } catch (IllegalStateException ex) { + log.info("Ignoring expected exception: " + ex); + } + + try { + /* + * Try to read from [ts2]. This timestamp was obtain after tx1 + * and before tx1 was committed. Since [minReleaseAge==0], the + * history for this timestamp was released after both tx0 and + * tx1 were done. Therefore, we should not be able to obtain a + * transaction for this timestamp. + */ + service.newTx(ts2); + fail("Expecting: " + IllegalStateException.class); + } catch (IllegalStateException ex) { + log.info("Ignoring expected exception: " + ex); + } try { /* - * Try to read from [ts]. Since [minReleaseAge==0] the history - * should be gone and this should fail. + * Try to read from [ts3]. This timestamp was obtain before tx1 + * committed. Since [minReleaseAge==0], the history for this + * timestamp was released after both tx0 and tx1 were done. 
+ * Therefore, we should not be able to obtain a transaction for + * this timestamp. */ - service.newTx(ts); - fail("Expecting: "+IllegalStateException.class); - } catch(IllegalStateException ex) { - log.info("Ignoring expected exception: "+ex); + service.newTx(ts3); + fail("Expecting: " + IllegalStateException.class); + } catch (IllegalStateException ex) { + log.info("Ignoring expected exception: " + ex); } + + /* + * Start transaction. This will read on the commitTime for tx1. + */ + final long tx3 = service.newTx(ts4); + + assertEquals(commitTimeTx1, service.getReadsOnTime(tx3)); + } finally { service.destroy(); @@ -1152,7 +1502,11 @@ // * transactions. // */ // service.notifyCommit(service.nextTimestamp()); + + // original last commit time is zero. + assertEquals(0L, service.getLastCommitTime()); + // original release time is zero. assertEquals(0L, service.getReleaseTime()); final long tx0 = service.newTx(ITx.UNISOLATED); @@ -1173,16 +1527,44 @@ // terminate the 1st tx. service.abort(tx0); - // verify that releaseTime was updated. + { + /* + * Verify that releaseTime was NOT updated yet. The original + * commit time is still pinned by [tx2]. + */ + assertEquals(0L, service.getReleaseTime()); + /* + * However, the lastCommitTime SHOULD have been updated. + */ + assertTrue(service.getLastCommitTime() > 0); + } + + /* + * Finally, commit the last tx. + * + * Note: This should cause the release time to be advanced. + */ + final long ct2 = service.commit(tx2); + final long releaseTime = service.getReleaseTime(); + final long lastCommitTime = service.getLastCommitTime(); - System.err.println("tx0 =" + tx0); - System.err.println("tx1 =" + tx1); - System.err.println("tx2 =" + tx2); - System.err.println("releaseTime = " + releaseTime); - System.err.println("lastCommitTime= " + lastCommitTime); + + if (log.isInfoEnabled()) { + log.info("tx0 =" + tx0); + log.info("tx1 =" + tx1); + log.info("tx2 =" + tx2); + log.info("ct2 = " + ct2); + log.info("releaseTime = " + releaseTime); + log.info("lastCommitTime= " + lastCommitTime); + } + + // Verify release time was updated. assertNotSame(0L, releaseTime); + // Verify the expected release time. + assertEquals(ct2 - 1, releaseTime); + /* * Note: The release time MUST NOT be advanced to the last commit * time!!! @@ -1191,11 +1573,11 @@ */ assertTrue(releaseTime < lastCommitTime); - // releaseTime GTE [tx0]. - assertTrue(releaseTime >= Math.abs(tx0)); - - // releaseTime is LT [tx2]. - assertTrue(releaseTime < Math.abs(tx2)); +// // releaseTime GTE [tx0]. +// assertTrue(releaseTime >= Math.abs(tx0)); +// +// // releaseTime is LT [tx2]. 
+// assertTrue(releaseTime < Math.abs(tx2)); } finally { Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java 2012-02-02 13:35:47 UTC (rev 5947) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java 2012-02-02 20:16:05 UTC (rev 5948) @@ -32,99 +32,99 @@ abstract public class TestMROWTransactions extends ProxyBigdataSailTestCase { private static final Logger txLog = Logger.getLogger("com.bigdata.txLog"); - + TestMROWTransactions() { - } + } TestMROWTransactions(String arg0) { - super(arg0); - } + super(arg0); + } - void domultiple_csem_transaction_onethread(final int retentionMillis) throws Exception { - - domultiple_csem_transaction_onethread(retentionMillis, 2000, 50); - + void domultiple_csem_transaction_onethread(final int retentionMillis) throws Exception { + + domultiple_csem_transaction_onethread(retentionMillis, 2000, 50); + } - - void domultiple_csem_transaction(final int retentionMillis) throws Exception { - - domultiple_csem_transaction2(retentionMillis, 2/* nreaderThreads */, - 1000/* nwriters */, 20 * 1000/* nreaders */); - - } - - /** - * - * @param retentionMillis - * The retention time (milliseconds). - * @param nreaderThreads - * The #of threads running reader tasks. Increase nreaderThreads - * to increase chance startup condition and decrement to increase - * chance of commit point with no open read-only transaction (no - * sessions). Value is in [1:...]. - * @param nwriters - * The #of writer tasks (there is only one writer thread). - * @param nreaders - * The #of reader tasks. - * - * @throws Exception - */ - void domultiple_csem_transaction2(final int retentionMillis, - final int nreaderThreads, final int nwriters, final int nreaders) - throws Exception { - /** - * The most likely problem is related to the session protection in the - * RWStore. In development we saw problems when concurrent transactions - * had reduced the open/active transactions to zero, therefore releasing - * session protection. If the protocol works correctly we should never - * release session protection if any transaction has been initialized. - * - * The message of "invalid address" would be generated if an allocation - * has been freed and is no longer protected from recycling when an - * attempt is made to read from it. - * - * TODO Experiment with different values of [nthreads] for the with and - * w/o history variations of this test. Consider lifting that parameter - * into the signature of this method. - */ - final int nuris = 2000; // number of unique subject/objects - final int npreds = 50; // -// final PseudoRandom r = new PseudoRandom(2000); -// r.next(1500); - final Random r = new Random(); + void domultiple_csem_transaction(final int retentionMillis) throws Exception { - final CAT writes = new CAT(); - final CAT reads = new CAT(); - final AtomicReference<Throwable> failex = new AtomicReference<Throwable>(null); - // Set [true] iff there are no failures by the time we cancel the running tasks. - final AtomicBoolean success = new AtomicBoolean(false); + domultiple_csem_transaction2(retentionMillis, 2/* nreaderThreads */, + 1000/* nwriters */, 20 * 1000/* nreaders */); + + } + + /** + * + * @param retentionMillis + * The retention time (milliseconds). 
+ * @param nreaderThreads + * The #of threads running reader tasks. Increase nreaderThreads + * to increase chance startup condition and decrement to increase + * chance of commit point with no open read-only transaction (no + * sessions). Value is in [1:...]. + * @param nwriters + * The #of writer tasks (there is only one writer thread). + * @param nreaders + * The #of reader tasks. + * + * @throws Exception + */ + void domultiple_csem_transaction2(final int retentionMillis, + final int nreaderThreads, final int nwriters, final int nreaders) + throws Exception { + + /** + * The most likely problem is related to the session protection in the + * RWStore. In development we saw... [truncated message content] |
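
For orientation, here is a minimal sketch of the idea behind the rev 5948 change, assuming a plain sorted map in place of the transient TxId2CommitTimeIndex B+Tree; the class and method names below (ReleaseTimeSketch, activateTx, newReleaseTime) are hypothetical and are not part of the bigdata code base. The point it illustrates is that the release time is now bounded by the earliest pinned commit time (readsOnCommitTime) rather than by the earliest transaction identifier.

import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

/**
 * Minimal sketch (not bigdata code): models the abs(txId) -> readsOnCommitTime
 * mapping with a sorted map instead of the transient TxId2CommitTimeIndex
 * B+Tree, and derives the release time from the earliest pinned commit time.
 */
public class ReleaseTimeSketch {

    /** Key = abs(txId); value = commit time on which that tx is reading. */
    private final ConcurrentSkipListMap<Long, Long> startTimeIndex =
            new ConcurrentSkipListMap<Long, Long>();

    /** Register an active tx (readsOnCommitTime is 0L if there are no commit points yet). */
    public void activateTx(final long txId, final long readsOnCommitTime) {
        if (readsOnCommitTime < 0)
            throw new IllegalArgumentException();
        startTimeIndex.put(Math.abs(txId), readsOnCommitTime);
    }

    /** Remove a transaction once it commits or aborts. */
    public void deactivateTx(final long txId) {
        startTimeIndex.remove(Math.abs(txId));
    }

    /**
     * Compute a candidate release time. While transactions remain open, the
     * earliest pinned commit time (not the earliest txId) bounds how much
     * history may be released; when none remain, [now] is used instead.
     */
    public long newReleaseTime(final long lastCommitTime, final long now,
            final long minReleaseAge) {
        final Map.Entry<Long, Long> first = startTimeIndex.firstEntry();
        final long earliestTxReadsOnCommitTime =
                (first == null) ? now : first.getValue();
        return Math.min(lastCommitTime - 1,
                Math.min(earliestTxReadsOnCommitTime - 1, now - minReleaseAge));
    }
}

In the ticket 467 scenario, two transactions reading on the same commit time keep that commit time pinned until the last of them completes, which is what the new test_newTx_readOnly_releaseTimeRespectsReadsOnCommitTime test in the diff above verifies.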
From: <tho...@us...> - 2012-02-02 20:52:29
Revision: 5949 http://bigdata.svn.sourceforge.net/bigdata/?rev=5949&view=rev Author: thompsonbry Date: 2012-02-02 20:52:23 +0000 (Thu, 02 Feb 2012) Log Message: ----------- Modified test to no longer impose a sleep after the last writer is done. Modified test to run only 20 writers (rather than 100). Both steps are necessary in order to have the test complete in a reasonable period of time. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java 2012-02-02 20:16:05 UTC (rev 5948) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java 2012-02-02 20:52:23 UTC (rev 5949) @@ -95,7 +95,7 @@ final Random r = new Random(); final CAT commits = new CAT(); - final CAT reads = new CAT(); + final CAT nreadersDone = new CAT(); final AtomicReference<Throwable> failex = new AtomicReference<Throwable>(null); // Set [true] iff there are no failures by the time we cancel the running tasks. final AtomicBoolean success = new AtomicBoolean(false); @@ -184,11 +184,14 @@ try { /* * Note: This sleep makes it much easier to hit the - * bug documented here: + * bug documented here. However, the sleep can also + * cause the test to really stretch out. So the + * sleep is only used until the writers are done. 
* * https://sourceforge.net/apps/trac/bigdata/ticket/467 */ - Thread.sleep(2000/* millis */); + if (commits.get() < nwriters) + Thread.sleep(2000/* millis */); txLog.info("Reading with tx: " + txId); final AbstractTripleStore readstore = (AbstractTripleStore) origStore .getIndexManager().getResourceLocator() @@ -201,7 +204,6 @@ try { while (stats.hasNext()) { stats.next(); - reads.increment(); } } finally { stats.close(); @@ -219,6 +221,7 @@ } finally { txLog.info("Aborting tx: " + txId); ((Journal) origStore.getIndexManager()).abort(txId); + nreadersDone.increment(); } } catch (Throwable ise) { if (!InnerCause.isInnerCause(ise, @@ -304,7 +307,7 @@ } if (log.isInfoEnabled()) log.info("Statements written: " + commits.get() - + ", read: " + reads.get()); + + ", read: " + nreadersDone.get()); } finally { if (writers != null) writers.shutdownNow(); Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-02 20:16:05 UTC (rev 5948) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-02 20:52:23 UTC (rev 5949) @@ -33,7 +33,7 @@ + ", nreaderThreads=" + nreaderThreads); domultiple_csem_transaction2(retentionMillis, nreaderThreads, - 100/* nwriters */, 400/* nreaders */); + 20/* nwriters */, 400/* nreaders */); } Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java 2012-02-02 20:16:05 UTC (rev 5948) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java 2012-02-02 20:52:23 UTC (rev 5949) @@ -95,7 +95,7 @@ final Random r = new Random(); final CAT commits = new CAT(); - final CAT reads = new CAT(); + final CAT nreadersDone = new CAT(); final AtomicReference<Throwable> failex = new AtomicReference<Throwable>(null); // Set [true] iff there are no failures by the time we cancel the running tasks. final AtomicBoolean success = new AtomicBoolean(false); @@ -184,11 +184,14 @@ try { /* * Note: This sleep makes it much easier to hit the - * bug documented here: + * bug documented here. However, the sleep can also + * cause the test to really stretch out. So the + * sleep is only used until the writers are done. 
* * https://sourceforge.net/apps/trac/bigdata/ticket/467 */ - Thread.sleep(2000/* millis */); + if (commits.get() < nwriters) + Thread.sleep(2000/* millis */); txLog.info("Reading with tx: " + txId); final AbstractTripleStore readstore = (AbstractTripleStore) origStore .getIndexManager().getResourceLocator() @@ -201,7 +204,6 @@ try { while (stats.hasNext()) { stats.next(); - reads.increment(); } } finally { stats.close(); @@ -219,6 +221,7 @@ } finally { txLog.info("Aborting tx: " + txId); ((Journal) origStore.getIndexManager()).abort(txId); + nreadersDone.increment(); } } catch (Throwable ise) { if (!InnerCause.isInnerCause(ise, @@ -304,7 +307,7 @@ } if (log.isInfoEnabled()) log.info("Statements written: " + commits.get() - + ", read: " + reads.get()); + + ", read: " + nreadersDone.get()); } finally { if (writers != null) writers.shutdownNow(); Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-02 20:16:05 UTC (rev 5948) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-02 20:52:23 UTC (rev 5949) @@ -33,7 +33,7 @@ + ", nreaderThreads=" + nreaderThreads); domultiple_csem_transaction2(retentionMillis, nreaderThreads, - 100/* nwriters */, 400/* nreaders */); + 20/* nwriters */, 400/* nreaders */); } This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
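
The reader-side change in rev 5949 amounts to throttling only while writers are still active. Below is a minimal sketch of that pattern, assuming a java.util.concurrent.atomic.AtomicLong counter in place of bigdata's CAT class; the class and method names are hypothetical and this is not the test code itself.

import java.util.concurrent.atomic.AtomicLong;

/**
 * Minimal sketch (not the test code): readers sleep only while writers are
 * still committing, which keeps the race window wide early on without
 * stretching the test out once the writers are done.
 */
public class ReaderThrottleSketch {

    private final AtomicLong commits = new AtomicLong(); // incremented per writer commit
    private final int nwriters;

    public ReaderThrottleSketch(final int nwriters) {
        this.nwriters = nwriters;
    }

    /** The writer task calls this after each commit. */
    public void writerCommitted() {
        commits.incrementAndGet();
    }

    /** Each reader task calls this before reading against its tx. */
    public void maybeSleep() throws InterruptedException {
        if (commits.get() < nwriters) {
            Thread.sleep(2000/* millis */);
        }
    }
}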
From: <tho...@us...> - 2012-02-02 21:11:03
Revision: 5950 http://bigdata.svn.sourceforge.net/bigdata/?rev=5950&view=rev Author: thompsonbry Date: 2012-02-02 21:10:56 +0000 (Thu, 02 Feb 2012) Log Message: ----------- Added release notes for 1.0.5 Added Paths: ----------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_5.txt branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_5.txt Added: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_5.txt =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_5.txt (rev 0) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_5.txt 2012-02-02 21:10:56 UTC (rev 5950) @@ -0,0 +1,121 @@ +This is a 1.0.x maintenance release of bigdata(R). New users are encouraged to go directly to the 1.1.0 release. Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation. + +Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput. + +See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7]. + +Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script. + +You can download the WAR from: + +http://sourceforge.net/projects/bigdata/ + +You can checkout this release from: + +https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_4 + +Feature summary: + +- Single machine data storage to ~50B triples/quads (RWStore); +- Clustered data storage is essentially unlimited; +- Simple embedded and/or webapp deployment (NanoSparqlServer); +- Triples, quads, or triples with provenance (SIDs); +- 100% native SPARQL 1.0 evaluation with lots of query optimizations; +- Fast RDFS+ inference and truth maintenance; +- Fast statement level provenance mode (SIDs). + +Road map [3]: + +- High-volume analytic query and SPARQL 1.1 query, including aggregations; +- SPARQL 1.1 Update, Property Paths, and Federation support; +- Simplified deployment, configuration, and administration for clusters; and +- High availability for the journal and the cluster. + +Change log: + + Note: Versions with (*) require data migration. For details, see [9]. + +1.0.5 + +- http://sourceforge.net/apps/trac/bigdata/ticket/362 (Fix incompatible with log4j - slf4j bridge.) 
+- http://sourceforge.net/apps/trac/bigdata/ticket/440 (BTree can not be cast to Name2Addr) +- http://sourceforge.net/apps/trac/bigdata/ticket/453 (Releasing blob DeferredFree record) +- http://sourceforge.net/apps/trac/bigdata/ticket/467 (IllegalStateException trying to access lexicon index using RWStore with recycling) + +1.0.4 + +- http://sourceforge.net/apps/trac/bigdata/ticket/443 (Logger for RWStore transaction service and recycler) +- http://sourceforge.net/apps/trac/bigdata/ticket/445 (RWStore does not track tx release correctly) +- http://sourceforge.net/apps/trac/bigdata/ticket/437 (Thread-local cache combined with unbounded thread pools causes effective memory leak: termCache memory leak & thread-local buffers) + +1.0.3 + + - http://sourceforge.net/apps/trac/bigdata/ticket/217 (BTreeCounters does not track bytes released) + - http://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface) + - http://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled.) + - http://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune the CommitRecordIndex) + - http://sourceforge.net/apps/trac/bigdata/ticket/375 (Persistent memory leaks (RWStore/DISK)) + - http://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException) + - http://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal) + - http://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer) + - http://sourceforge.net/apps/trac/bigdata/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API) + - http://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment) + - http://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API) + - http://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail) + - http://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer) + - http://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out) + - http://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out) + - http://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit) + - http://sourceforge.net/apps/trac/bigdata/ticket/435 (Address is 0L) + - http://sourceforge.net/apps/trac/bigdata/ticket/436 (TestMROWTransactions failure in CI) + +1.0.2 + + - http://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.) + - http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) + - http://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.) + - http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.) + - http://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.) 
+ - http://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.)
+ - http://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.)
+ - http://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.)
+ - http://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0)
+
+1.0.1 (*)
+
+ - http://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".)
+ - http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
+ - http://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.)
+
+For more information about bigdata, please see the following links:
+
+[1] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page
+[2] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted
+[3] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap
+[4] http://www.bigdata.com/bigdata/docs/api/
+[5] http://sourceforge.net/projects/bigdata/
+[6] http://www.bigdata.com/blog
+[7] http://www.systap.com/bigdata.htm
+[8] http://sourceforge.net/projects/bigdata/files/bigdata/
+[9] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration
+
+About bigdata:
+
+Bigdata® is a horizontally-scaled, general purpose storage and computing fabric
+for ordered data (B+Trees), designed to operate on either a single server or a
+cluster of commodity hardware. Bigdata® uses dynamically partitioned key-range
+shards in order to remove any realistic scaling limits - in principle, bigdata®
+may be deployed on 10s, 100s, or even thousands of machines and new capacity may
+be added incrementally without requiring the full reload of all data. The bigdata®
+RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL),
+and datum level provenance.
Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_5.txt ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Added: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_5.txt =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_5.txt (rev 0) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_5.txt 2012-02-02 21:10:56 UTC (rev 5950) @@ -0,0 +1,121 @@ +This is a 1.0.x maintenance release of bigdata(R). New users are encouraged to go directly to the 1.1.0 release. Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation. + +Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput. + +See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7]. + +Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script. + +You can download the WAR from: + +http://sourceforge.net/projects/bigdata/ + +You can checkout this release from: + +https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_4 + +Feature summary: + +- Single machine data storage to ~50B triples/quads (RWStore); +- Clustered data storage is essentially unlimited; +- Simple embedded and/or webapp deployment (NanoSparqlServer); +- Triples, quads, or triples with provenance (SIDs); +- 100% native SPARQL 1.0 evaluation with lots of query optimizations; +- Fast RDFS+ inference and truth maintenance; +- Fast statement level provenance mode (SIDs). + +Road map [3]: + +- High-volume analytic query and SPARQL 1.1 query, including aggregations; +- SPARQL 1.1 Update, Property Paths, and Federation support; +- Simplified deployment, configuration, and administration for clusters; and +- High availability for the journal and the cluster. + +Change log: + + Note: Versions with (*) require data migration. For details, see [9]. + +1.0.5 + +- http://sourceforge.net/apps/trac/bigdata/ticket/362 (Fix incompatible with log4j - slf4j bridge.) 
+- http://sourceforge.net/apps/trac/bigdata/ticket/440 (BTree can not be cast to Name2Addr) +- http://sourceforge.net/apps/trac/bigdata/ticket/453 (Releasing blob DeferredFree record) +- http://sourceforge.net/apps/trac/bigdata/ticket/467 (IllegalStateException trying to access lexicon index using RWStore with recycling) + +1.0.4 + +- http://sourceforge.net/apps/trac/bigdata/ticket/443 (Logger for RWStore transaction service and recycler) +- http://sourceforge.net/apps/trac/bigdata/ticket/445 (RWStore does not track tx release correctly) +- http://sourceforge.net/apps/trac/bigdata/ticket/437 (Thread-local cache combined with unbounded thread pools causes effective memory leak: termCache memory leak & thread-local buffers) + +1.0.3 + + - http://sourceforge.net/apps/trac/bigdata/ticket/217 (BTreeCounters does not track bytes released) + - http://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface) + - http://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled.) + - http://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune the CommitRecordIndex) + - http://sourceforge.net/apps/trac/bigdata/ticket/375 (Persistent memory leaks (RWStore/DISK)) + - http://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException) + - http://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal) + - http://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer) + - http://sourceforge.net/apps/trac/bigdata/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API) + - http://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment) + - http://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API) + - http://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail) + - http://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer) + - http://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out) + - http://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out) + - http://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit) + - http://sourceforge.net/apps/trac/bigdata/ticket/435 (Address is 0L) + - http://sourceforge.net/apps/trac/bigdata/ticket/436 (TestMROWTransactions failure in CI) + +1.0.2 + + - http://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.) + - http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) + - http://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.) + - http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.) + - http://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.) 
+ - http://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.)
+ - http://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.)
+ - http://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.)
+ - http://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0)
+
+1.0.1 (*)
+
+ - http://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value).
+ - http://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".)
+ - http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
+ - http://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.)
+
+For more information about bigdata, please see the following links:
+
+[1] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page
+[2] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted
+[3] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap
+[4] http://www.bigdata.com/bigdata/docs/api/
+[5] http://sourceforge.net/projects/bigdata/
+[6] http://www.bigdata.com/blog
+[7] http://www.systap.com/bigdata.htm
+[8] http://sourceforge.net/projects/bigdata/files/bigdata/
+[9] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration
+
+About bigdata:
+
+Bigdata® is a horizontally-scaled, general purpose storage and computing fabric
+for ordered data (B+Trees), designed to operate on either a single server or a
+cluster of commodity hardware. Bigdata® uses dynamically partitioned key-range
+shards in order to remove any realistic scaling limits - in principle, bigdata®
+may be deployed on 10s, 100s, or even thousands of machines and new capacity may
+be added incrementally without requiring the full reload of all data. The bigdata®
+RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL),
+and datum level provenance.

Property changes on: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_5.txt
___________________________________________________________________
Added: svn:keywords
   + Id Date Revision Author HeadURL

This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
|
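The release notes above steer single-machine users toward the Journal and the BigdataSail (tickets 422 and 427 in the change log refer to the BigdataSail and its read-only connections). As a rough orientation only, the following is a minimal sketch, under stated assumptions, of opening a Journal-backed repository and issuing a SPARQL query through the Sesame (openrdf) API: the BigdataSailRepository wrapper, the RWStore.properties path, and the sample query are illustrative assumptions, not excerpts from the release notes.

    import java.io.FileInputStream;
    import java.util.Properties;

    import org.openrdf.query.QueryLanguage;
    import org.openrdf.query.TupleQuery;
    import org.openrdf.query.TupleQueryResult;
    import org.openrdf.repository.RepositoryConnection;

    import com.bigdata.rdf.sail.BigdataSail;
    import com.bigdata.rdf.sail.BigdataSailRepository;

    public class JournalQuerySketch {

        public static void main(final String[] args) throws Exception {

            // Assumption: journal/RWStore options are read from a properties
            // file like the RWStore.properties shipped in the WAR (path is
            // illustrative).
            final Properties props = new Properties();
            props.load(new FileInputStream("RWStore.properties"));

            // The BigdataSail wraps the single-machine Journal; the
            // BigdataSailRepository exposes it through the Sesame Repository API.
            final BigdataSail sail = new BigdataSail(props);
            final BigdataSailRepository repo = new BigdataSailRepository(sail);
            repo.initialize();

            RepositoryConnection cxn = null;
            try {
                cxn = repo.getConnection();
                final TupleQuery query = cxn.prepareTupleQuery(
                        QueryLanguage.SPARQL,
                        "SELECT * WHERE { ?s ?p ?o } LIMIT 10");
                final TupleQueryResult result = query.evaluate();
                try {
                    while (result.hasNext()) {
                        System.out.println(result.next());
                    }
                } finally {
                    result.close();
                }
            } finally {
                if (cxn != null)
                    cxn.close();
                repo.shutDown();
            }
        }
    }

The same Journal-backed database is what the NanoSparqlServer WAR described in the notes exposes over HTTP, so this embedded sketch and the WAR deployment are two views of one store.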
From: <tho...@us...> - 2012-02-02 21:45:26
|
Revision: 5952 http://bigdata.svn.sourceforge.net/bigdata/?rev=5952&view=rev Author: thompsonbry Date: 2012-02-02 21:45:20 +0000 (Thu, 02 Feb 2012) Log Message: ----------- log message correction to reflect new semantics for counters in the test Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java 2012-02-02 21:32:05 UTC (rev 5951) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java 2012-02-02 21:45:20 UTC (rev 5952) @@ -306,8 +306,8 @@ } } if (log.isInfoEnabled()) - log.info("Statements written: " + commits.get() - + ", read: " + nreadersDone.get()); + log.info("Writers committed: " + commits.get() + + ", readers done: " + nreadersDone.get()); } finally { if (writers != null) writers.shutdownNow(); Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java 2012-02-02 21:32:05 UTC (rev 5951) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactions.java 2012-02-02 21:45:20 UTC (rev 5952) @@ -306,8 +306,8 @@ } } if (log.isInfoEnabled()) - log.info("Statements written: " + commits.get() - + ", read: " + nreadersDone.get()); + log.info("Writers committed: " + commits.get() + + ", readers done: " + nreadersDone.get()); } finally { if (writers != null) writers.shutdownNow(); This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2012-02-02 22:27:19
|
Revision: 5953 http://bigdata.svn.sourceforge.net/bigdata/?rev=5953&view=rev Author: thompsonbry Date: 2012-02-02 22:27:13 +0000 (Thu, 02 Feb 2012) Log Message: ----------- Further reducing the #of trials for the TestMROWTransactions. It is now 10 random trials w/ and w/o history for each of the database modes. We will gain depth through multiple CI builds Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsNoHistory.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsNoHistory.java branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsNoHistory.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsNoHistory.java 2012-02-02 21:45:20 UTC (rev 5952) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsNoHistory.java 2012-02-02 22:27:13 UTC (rev 5953) @@ -79,7 +79,7 @@ final Random r = new Random(); - for (int i = 0; i < 100; i++) { + for (int i = 0; i < 10; i++) { final int nreaderThreads = r.nextInt(19) + 1; Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-02 21:45:20 UTC (rev 5952) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-02 22:27:13 UTC (rev 5953) @@ -23,7 +23,7 @@ final Random r = new Random(); - for (int i = 0; i < 100; i++) { + for (int i = 0; i < 10; i++) { final int retentionMillis = (r.nextInt(10) * 10) + 1; Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsNoHistory.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsNoHistory.java 2012-02-02 21:45:20 UTC (rev 5952) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsNoHistory.java 2012-02-02 22:27:13 UTC (rev 5953) @@ -79,7 +79,7 @@ final Random r = new Random(); - for (int i = 0; i < 100; i++) { + for (int i = 0; i < 10; i++) { final int nreaderThreads = r.nextInt(19) + 1; Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-02 21:45:20 UTC (rev 5952) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestMROWTransactionsWithHistory.java 2012-02-02 22:27:13 UTC (rev 5953) @@ -23,7 +23,7 @@ final Random r = new Random(); - for (int i = 0; i < 100; i++) { + for (int i = 0; i < 10; i++) { final int retentionMillis = (r.nextInt(10) * 10) + 1; This was sent by the SourceForge.net collaborative development platform, the world's largest Open 
Source development site. |
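For readers skimming the two pairs of hunks above, the shape of the reduced trial loop is roughly the sketch below. Only the loop bound (10, down from 100) and the two random parameters are taken from the diffs; nreaderThreads comes from the NoHistory test class and retentionMillis from the WithHistory test class, and they are combined into one placeholder method here purely for illustration. The class and doTrial(...) are hypothetical stand-ins, not the actual TestMROWTransactions code.

    import java.util.Random;

    // Sketch of the randomized trials after r5953 (placeholder class, not the
    // real test code).
    public class MROWTrialSketch {

        void runTrials() {
            final Random r = new Random();
            for (int i = 0; i < 10; i++) { // was 100 trials before this commit
                // 1..19 concurrent reader threads (from TestMROWTransactionsNoHistory).
                final int nreaderThreads = r.nextInt(19) + 1;
                // Retention of 1, 11, 21, ... 91 ms (from TestMROWTransactionsWithHistory).
                final int retentionMillis = (r.nextInt(10) * 10) + 1;
                doTrial(nreaderThreads, retentionMillis);
            }
        }

        // Hypothetical stand-in for the real trial body, which runs concurrent
        // writers and readers and logs the counters renamed in r5952.
        private void doTrial(final int nreaderThreads, final int retentionMillis) {
            // intentionally empty in this sketch
        }
    }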
From: <tho...@us...> - 2012-02-03 11:29:37
|
Revision: 5955 http://bigdata.svn.sourceforge.net/bigdata/?rev=5955&view=rev Author: thompsonbry Date: 2012-02-03 11:29:28 +0000 (Fri, 03 Feb 2012) Log Message: ----------- Fixed link to the release in SVN Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_5.txt branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_5.txt Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_5.txt =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_5.txt 2012-02-03 11:25:59 UTC (rev 5954) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_5.txt 2012-02-03 11:29:28 UTC (rev 5955) @@ -12,7 +12,7 @@ You can checkout this release from: -https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_4 +https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_5 Feature summary: Modified: branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_5.txt =================================================================== --- branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_5.txt 2012-02-03 11:25:59 UTC (rev 5954) +++ branches/BIGDATA_RELEASE_1_1_0/bigdata/src/releases/RELEASE_1_0_5.txt 2012-02-03 11:29:28 UTC (rev 5955) @@ -12,7 +12,7 @@ You can checkout this release from: -https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_4 +https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_5 Feature summary: This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |