From: <tho...@us...> - 2013-03-20 19:10:08
Revision: 7023
          http://bigdata.svn.sourceforge.net/bigdata/?rev=7023&view=rev
Author:   thompsonbry
Date:     2013-03-20 19:09:58 +0000 (Wed, 20 Mar 2013)

Log Message:
-----------
Changed the default #of write cache buffers for HAJournal.config from 12 to 500. This provides a substantial reduction in the IO wait associated with the leader and allows use with traditional disk (SATA) as well as SAS and SSD.

Added a means to list the existing snapshots and exposed that list on the "GET .../status" page.

Added an HAGlue method to compute the digest of a snapshot.

Bug fix to the computeDigest() methods. The implementations were using digest.digest() rather than digest.update(), which means that the digest was only valid for the last buffer's worth of data!

Bug fix to RWStore.computeDigest(). This method was computing the digest starting after the root blocks. The digest methods should compute the digest of the entire file. I have also updated the documentation to clarify this.

Added a ?digests option to the "GET .../status" page. This option prints the digests for the Journal, HALogs, and snapshot files. It is only a debugging tool.

Using the ?digests option, I was able to determine that the RWStore and HALog digests were (still) not being computed correctly. The snapshot MD5 digests agree with those computed by the md5 utility in the OS. The RWStore and WORMStrategy computeDigest() methods were broken: they were failing to update the offset after each block read. These methods were also modified to use the digest.update(ByteBuffer) variant for better efficiency, and the temporary byte[] was removed from the code. The HALog md5 was also broken with the same root cause. In addition, the sequence counter was not being updated, and the ByteBuffer needed to be flipped after the FileChannelUtility.readAll() call. I fixed both implementations. They now produce the same results as the OS md5 utility.

Fixed a bug in RESTORE where an endless loop could occur if a service had an empty HALog file lying around. During RESTORE, the HALog files are applied one by one for each commit point that is +1 over the current committed data. Since the opening and closing root blocks are the same for an empty HALog file, the empty file failed to advance the commit point on the journal, causing the service to attempt to re-apply the same file again and again.

No change in HA test suite failures.
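To make the root cause concrete, here is a minimal, self-contained sketch of the corrected block-by-block digest loop. It is illustrative only, not the committed code: it uses plain java.nio positional reads in place of bigdata's FileChannelUtility.readAll()/reopener plumbing, and the class and method names are invented for the example. It shows the three fixes at once: digest.update() per block instead of digest.digest(), the flip() before handing the buffer to the digest, and the offset/sequence bookkeeping that the broken loops failed to advance.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class FileDigestSketch {

    /**
     * Compute the MD5 digest of the entire file, starting from byte zero
     * (magic, version, root blocks, and all), one block at a time.
     */
    public static byte[] md5(final String path) throws IOException,
            NoSuchAlgorithmException {

        final MessageDigest digest = MessageDigest.getInstance("MD5");

        try (FileChannel ch = FileChannel.open(Paths.get(path),
                StandardOpenOption.READ)) {

            long offset = 0L;           // current position in the file.
            long remaining = ch.size(); // bytes left to digest.
            long sequence = 0L;         // #of blocks processed.

            // One block per pass (1MB, matching the store's buffer size).
            final ByteBuffer b = ByteBuffer.allocate(1024 * 1024);

            while (remaining > 0) {

                final int nbytes = (int) Math.min(b.capacity(), remaining);

                b.clear();
                b.limit(nbytes);

                // Read the block fully at the current offset.
                while (b.hasRemaining()) {
                    if (ch.read(b, offset + b.position()) < 0)
                        throw new IOException("EOF at offset=" + offset);
                }

                // Flip before handing the buffer to the digest: update()
                // consumes position..limit. (The missing flip() was one of
                // the HALog bugs.)
                b.flip();

                // update(), never digest(), inside the loop.
                digest.update(b);

                // Advance the cursor -- the step the broken loops omitted.
                offset += nbytes;
                remaining -= nbytes;
                sequence++;
            }
        }

        // Finalize exactly once, after all blocks have been folded in.
        return digest.digest();
    }
}

digest.digest() both emits and resets the accumulated state, which is why the old code effectively hashed only the final block; update() accumulates, and digest() is invoked exactly once at the end, at which point the result can be checked against the OS md5 utility.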
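The snapshot digest works the same way but streams through a GZIPInputStream, so the digest is taken over the uncompressed journal bytes and can be compared directly with the digest of the live journal (or with the OS md5 utility run against a decompressed copy of the .jnl.gz file). Again a sketch with invented names; the 4k read buffer mirrors the patch.

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.zip.GZIPInputStream;

public class SnapshotDigestSketch {

    /**
     * Compute the MD5 digest of the uncompressed content of a gzipped
     * snapshot file (e.g., 00000000000000000001.jnl.gz).
     */
    public static byte[] md5OfSnapshot(final String file)
            throws IOException, NoSuchAlgorithmException {

        final MessageDigest digest = MessageDigest.getInstance("MD5");

        // GZIPInputStream hands back the original journal bytes.
        try (InputStream is = new GZIPInputStream(new FileInputStream(file))) {

            final byte[] a = new byte[4 * 1024];

            int nread;

            // Stream to EOF, folding every chunk in with update().
            while ((nread = is.read(a, 0, a.length)) != -1) {

                digest.update(a, 0, nread);

            }

        }

        return digest.digest();
    }
}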
Modified Paths:
--------------
    branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/HAGlue.java
    branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/HAGlueDelegate.java
    branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/althalog/HALogFile.java
    branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/halog/HALogReader.java
    branches/READ_CACHE/bigdata/src/java/com/bigdata/journal/AbstractJournal.java
    branches/READ_CACHE/bigdata/src/java/com/bigdata/journal/IHABufferStrategy.java
    branches/READ_CACHE/bigdata/src/java/com/bigdata/journal/RootBlockView.java
    branches/READ_CACHE/bigdata/src/java/com/bigdata/journal/WORMStrategy.java
    branches/READ_CACHE/bigdata/src/java/com/bigdata/rwstore/RWStore.java
    branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/CommitTimeIndex.java
    branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/DefaultRestorePolicy.java
    branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java
    branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java
    branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HARestore.java
    branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/SnapshotManager.java
    branches/READ_CACHE/bigdata-jini/src/test/com/bigdata/journal/jini/ha/AbstractHAJournalServerTestCase.java
    branches/READ_CACHE/bigdata-jini/src/test/com/bigdata/journal/jini/ha/TestHA3SnapshotPolicy.java
    branches/READ_CACHE/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/HAStatusServletUtil.java
    branches/READ_CACHE/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/StatusServlet.java
    branches/READ_CACHE/src/resources/HAJournal/HAJournal.config

Added Paths:
-----------
    branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/msg/HASnapshotDigestRequest.java
    branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/msg/HASnapshotDigestResponse.java
    branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/msg/IHASnapshotDigestRequest.java
    branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/msg/IHASnapshotDigestResponse.java

Modified: branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/HAGlue.java =================================================================== --- branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/HAGlue.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/HAGlue.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -39,6 +39,8 @@ import com.bigdata.ha.msg.IHALogDigestResponse; import com.bigdata.ha.msg.IHARootBlockRequest; import com.bigdata.ha.msg.IHARootBlockResponse; +import com.bigdata.ha.msg.IHASnapshotDigestRequest; +import com.bigdata.ha.msg.IHASnapshotDigestResponse; import com.bigdata.ha.msg.IHASnapshotRequest; import com.bigdata.ha.msg.IHASnapshotResponse; import com.bigdata.journal.AbstractJournal; @@ -152,8 +154,8 @@ RunState getRunState() throws IOException; /** - * Compute the digest of the backing store - <strong>THIS METHOD IS ONLY FOR - * DIAGNOSTIC PURPOSES.</strong> + * Compute the digest of the entire backing store - <strong>THIS METHOD IS + * ONLY FOR DIAGNOSTIC PURPOSES.</strong> * <p> * The digest is useless if there are concurrent writes since it can not be * meaningfully compared with the digest of another store unless both stores @@ -163,8 +165,8 @@ NoSuchAlgorithmException, DigestException; /** - * Compute the digest of a HALog file - <strong>THIS METHOD IS ONLY FOR - * DIAGNOSTIC PURPOSES.</strong> + * Compute the digest of the entire HALog file - <strong>THIS METHOD IS ONLY + * FOR DIAGNOSTIC PURPOSES.</strong> 
* <p> * The digest is useless if there are concurrent writes since it can not be * meaningfully compared with the digest of another store unless both stores @@ -177,6 +179,18 @@ NoSuchAlgorithmException, DigestException; /** + * Compute the digest of the entire snapshot file - <strong>THIS METHOD IS + * ONLY FOR DIAGNOSTIC PURPOSES.</strong> This digest is computed for the + * uncompressed data so it may be compared directly with the digest of the + * backing store from which the snapshot was obtained. + * + * @throws FileNotFoundException + * if no snapshot exists for that commit point. + */ + IHASnapshotDigestResponse computeHASnapshotDigest(IHASnapshotDigestRequest req) + throws IOException, NoSuchAlgorithmException, DigestException; + + /** * Obtain a global write lock on the leader. The lock only blocks writers. * Readers may continue to execute without delay. * <p> Modified: branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/HAGlueDelegate.java =================================================================== --- branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/HAGlueDelegate.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/HAGlueDelegate.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -50,6 +50,8 @@ import com.bigdata.ha.msg.IHARootBlockRequest; import com.bigdata.ha.msg.IHARootBlockResponse; import com.bigdata.ha.msg.IHASendStoreResponse; +import com.bigdata.ha.msg.IHASnapshotDigestRequest; +import com.bigdata.ha.msg.IHASnapshotDigestResponse; import com.bigdata.ha.msg.IHASnapshotRequest; import com.bigdata.ha.msg.IHASnapshotResponse; import com.bigdata.ha.msg.IHASyncRequest; @@ -239,6 +241,13 @@ } @Override + public IHASnapshotDigestResponse computeHASnapshotDigest( + final IHASnapshotDigestRequest req) throws IOException, + NoSuchAlgorithmException, DigestException { + return delegate.computeHASnapshotDigest(req); + } + + @Override public Future<Void> globalWriteLock(final IHAGlobalWriteLockRequest req) throws IOException, TimeoutException, InterruptedException { return delegate.globalWriteLock(req); Modified: branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/althalog/HALogFile.java =================================================================== --- branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/althalog/HALogFile.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/althalog/HALogFile.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -853,8 +853,8 @@ // The backing ByteBuffer. final ByteBuffer b = buf.buffer(); - // A byte[] with the same capacity as that ByteBuffer. - final byte[] a = new byte[b.capacity()]; +// // A byte[] with the same capacity as that ByteBuffer. +// final byte[] a = new byte[b.capacity()]; // The capacity of that buffer (typically 1MB). final int bufferCapacity = b.capacity(); @@ -893,14 +893,19 @@ // read block FileChannelUtility.readAll(reopener, b, offset); - // Copy data into our byte[]. - final byte[] c = BytesUtil.toArray(b, false/* forceCopy */, a); 
+// final byte[] c = BytesUtil.toArray(b, false/* forceCopy */, a); // update digest - digest.digest(c, 0/* off */, nbytes/* len */); +// digest.update(c, 0/* off */, nbytes/* len */); + digest.update(b); + offset += nbytes; + remaining -= nbytes; + sequence++; + } if (log.isInfoEnabled()) Modified: branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/halog/HALogReader.java =================================================================== --- branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/halog/HALogReader.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/halog/HALogReader.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -513,8 +513,8 @@ // The backing ByteBuffer. final ByteBuffer b = buf.buffer(); - // A byte[] with the same capacity as that ByteBuffer. - final byte[] a = new byte[b.capacity()]; +// // A byte[] with the same capacity as that ByteBuffer. +// final byte[] a = new byte[b.capacity()]; // The capacity of that buffer (typically 1MB). final int bufferCapacity = b.capacity(); @@ -553,14 +553,20 @@ // read block FileChannelUtility.readAll(reopener, b, offset); - // Copy data into our byte[]. - final byte[] c = BytesUtil.toArray(b, false/* forceCopy */, a); +// // Copy data into our byte[]. +// final byte[] c = BytesUtil.toArray(b, false/* forceCopy */, a); // update digest - digest.digest(c, 0/* off */, nbytes/* len */); - +// digest.update(c, 0/* off */, nbytes/* len */); + b.flip(); + digest.update(b); + + offset += nbytes; + remaining -= nbytes; - + + sequence++; + } if (log.isInfoEnabled()) Added: branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/msg/HASnapshotDigestRequest.java =================================================================== --- branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/msg/HASnapshotDigestRequest.java (rev 0) +++ branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/msg/HASnapshotDigestRequest.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -0,0 +1,47 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2010. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. 
+ +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +package com.bigdata.ha.msg; + +import java.io.Serializable; + +public class HASnapshotDigestRequest implements IHASnapshotDigestRequest, Serializable { + + private static final long serialVersionUID = 1L; + + private final long commitCounter; + + public HASnapshotDigestRequest(final long commitCounter) { + + this.commitCounter = commitCounter; + + } + + @Override + public long getCommitCounter() { + + return commitCounter; + + } + +} Added: branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/msg/HASnapshotDigestResponse.java =================================================================== --- branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/msg/HASnapshotDigestResponse.java (rev 0) +++ branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/msg/HASnapshotDigestResponse.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -0,0 +1,57 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2010. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +package com.bigdata.ha.msg; + +import java.io.Serializable; + +public class HASnapshotDigestResponse implements IHASnapshotDigestResponse, Serializable { + + private static final long serialVersionUID = 1L; + + private final long commitCounter; + private final byte[] digest; + + public HASnapshotDigestResponse(final long commitCounter, final byte[] digest) { + + this.commitCounter = commitCounter; + + this.digest = digest; + + } + + @Override + public long getCommitCounter() { + + return commitCounter; + + } + + @Override + public byte[] getDigest() { + + return digest; + + } + +} Added: branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/msg/IHASnapshotDigestRequest.java =================================================================== --- branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/msg/IHASnapshotDigestRequest.java (rev 0) +++ branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/msg/IHASnapshotDigestRequest.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -0,0 +1,40 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2010. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. 
+ +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +package com.bigdata.ha.msg; + + +/** + * Message used to request the digest of the snapshot file associated with + * a specified commit point. + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + */ +public interface IHASnapshotDigestRequest extends IHAMessage { + + /** + * The commit counter of the snapshot. + */ + long getCommitCounter(); + +} Added: branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/msg/IHASnapshotDigestResponse.java =================================================================== --- branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/msg/IHASnapshotDigestResponse.java (rev 0) +++ branches/READ_CACHE/bigdata/src/java/com/bigdata/ha/msg/IHASnapshotDigestResponse.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -0,0 +1,44 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2010. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +package com.bigdata.ha.msg; + +/** + * Message used to communicate the digest of a snapshot file associated with a + * specific commit point. + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + */ +public interface IHASnapshotDigestResponse extends IHAMessage { + + /** + * The commit counter for the snapshot. + */ + long getCommitCounter(); + + /** + * The computed digest. 
+ */ + byte[] getDigest(); + +} Modified: branches/READ_CACHE/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/READ_CACHE/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -118,6 +118,8 @@ import com.bigdata.ha.msg.IHARootBlockRequest; import com.bigdata.ha.msg.IHARootBlockResponse; import com.bigdata.ha.msg.IHASendStoreResponse; +import com.bigdata.ha.msg.IHASnapshotDigestRequest; +import com.bigdata.ha.msg.IHASnapshotDigestResponse; import com.bigdata.ha.msg.IHASnapshotRequest; import com.bigdata.ha.msg.IHASnapshotResponse; import com.bigdata.ha.msg.IHASyncRequest; @@ -5523,6 +5525,15 @@ } @Override + public IHASnapshotDigestResponse computeHASnapshotDigest( + final IHASnapshotDigestRequest req) throws IOException, + NoSuchAlgorithmException, DigestException { + + throw new UnsupportedOperationException(); + + } + + @Override public Future<Void> globalWriteLock(final IHAGlobalWriteLockRequest req) throws IOException, TimeoutException, InterruptedException { Modified: branches/READ_CACHE/bigdata/src/java/com/bigdata/journal/IHABufferStrategy.java =================================================================== --- branches/READ_CACHE/bigdata/src/java/com/bigdata/journal/IHABufferStrategy.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata/src/java/com/bigdata/journal/IHABufferStrategy.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -179,7 +179,8 @@ Object snapshotAllocators(); /** - * Compute the digest. + * Compute the digest of the entire backing store (including the magic, file + * version, root blocks, etc). * <p> * Note: The digest is not reliable unless you either use a snapshot or * suspend writes (on the quorum) while it is computed. Modified: branches/READ_CACHE/bigdata/src/java/com/bigdata/journal/RootBlockView.java =================================================================== --- branches/READ_CACHE/bigdata/src/java/com/bigdata/journal/RootBlockView.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata/src/java/com/bigdata/journal/RootBlockView.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -958,12 +958,36 @@ } private static final String toString(final DateFormat df, final long t) { - + return Long.toString(t) + (t != 0L ? " [" + df.format(new Date(t)) + "]" : ""); - + } - + + private static DateFormat getDateFormat() { + + final DateFormat df = DateFormat.getDateTimeInstance( + DateFormat.FULL/* dateStyle */, DateFormat.FULL/* timeStyle */); + + return df; + + } + + /** + * Format a commit time as the raw milliseconds since the epoch value plus a + * fully expressed date and time. + * + * @param t + * The commit time. + * + * @return The date and time string. + */ + public static String toString(final long t) { + + return toString(getDateFormat(), t); + + } + public long getMetaBitsAddr() { if (getVersion() < VERSION1) { // Always WORM store before VERSION1 Modified: branches/READ_CACHE/bigdata/src/java/com/bigdata/journal/WORMStrategy.java =================================================================== --- branches/READ_CACHE/bigdata/src/java/com/bigdata/journal/WORMStrategy.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata/src/java/com/bigdata/journal/WORMStrategy.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -2740,8 +2740,8 @@ // The backing ByteBuffer. 
final ByteBuffer b = buf.buffer(); - // A byte[] with the same capacity as that ByteBuffer. - final byte[] a = new byte[b.capacity()]; +// // A byte[] with the same capacity as that ByteBuffer. +// final byte[] a = new byte[b.capacity()]; // The capacity of that buffer (typically 1MB). final int bufferCapacity = b.capacity(); @@ -2780,14 +2780,17 @@ // read block readRaw(/*nbytes,*/ offset, b); - // Copy data into our byte[]. - final byte[] c = BytesUtil.toArray(b, false/* forceCopy */, a); +// // Copy data into our byte[]. +// final byte[] c = BytesUtil.toArray(b, false/* forceCopy */, a); // update digest - digest.digest(c, 0/* off */, nbytes/* len */); +// digest.update(c, 0/* off */, nbytes/* len */); + digest.update(b); remaining -= nbytes; - + + offset += nbytes; + sequence++; } Modified: branches/READ_CACHE/bigdata/src/java/com/bigdata/rwstore/RWStore.java =================================================================== --- branches/READ_CACHE/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -6045,8 +6045,8 @@ // The backing ByteBuffer. final ByteBuffer b = buf.buffer(); - // A byte[] with the same capacity as that ByteBuffer. - final byte[] a = new byte[b.capacity()]; +// // A byte[] with the same capacity as that ByteBuffer. +// final byte[] a = new byte[b.capacity()]; // The capacity of that buffer (typically 1MB). final int bufferCapacity = b.capacity(); @@ -6061,7 +6061,7 @@ long remaining = totalBytes; // The offset of the current block. - long offset = FileMetadata.headerSize0; // 0L; + long offset = 0L; // The block sequence. long sequence = 0L; @@ -6085,14 +6085,17 @@ // read block readRaw(/*nbytes,*/ offset, b); - // Copy data into our byte[]. - final byte[] c = BytesUtil.toArray(b, false/* forceCopy */, a); +// // Copy data into our byte[]. 
+// final byte[] c = BytesUtil.toArray(b, false/* forceCopy */, a); // update digest - digest.digest(c, 0/* off */, nbytes/* len */); + //digest.update(c, 0/* off */, nbytes/* len */); + digest.update(b); remaining -= nbytes; + offset += nbytes; + sequence++; } Modified: branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/CommitTimeIndex.java =================================================================== --- branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/CommitTimeIndex.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/CommitTimeIndex.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -301,6 +301,7 @@ final long snapshotCommitCounter; synchronized (this) { + @SuppressWarnings("unchecked") final ITupleIterator<IRootBlockView> itr = rangeIterator(null/* fromKey */, null/* toKey */, 1/* capacity */, IRangeQuery.DEFAULT Modified: branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/DefaultRestorePolicy.java =================================================================== --- branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/DefaultRestorePolicy.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/DefaultRestorePolicy.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -75,8 +75,7 @@ final long then = now - millis; - final IRootBlockView rootBlock = jnl.getSnapshotManager() - .getSnapshotIndex().find(then); + final IRootBlockView rootBlock = jnl.getSnapshotManager().find(then); if (rootBlock == null) { Modified: branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java =================================================================== --- branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -69,6 +69,7 @@ import com.bigdata.ha.msg.HALogDigestResponse; import com.bigdata.ha.msg.HALogRootBlocksResponse; import com.bigdata.ha.msg.HASendStoreResponse; +import com.bigdata.ha.msg.HASnapshotDigestResponse; import com.bigdata.ha.msg.IHADigestRequest; import com.bigdata.ha.msg.IHADigestResponse; import com.bigdata.ha.msg.IHAGlobalWriteLockRequest; @@ -79,6 +80,8 @@ import com.bigdata.ha.msg.IHALogRootBlocksResponse; import com.bigdata.ha.msg.IHARebuildRequest; import com.bigdata.ha.msg.IHASendStoreResponse; +import com.bigdata.ha.msg.IHASnapshotDigestRequest; +import com.bigdata.ha.msg.IHASnapshotDigestResponse; import com.bigdata.ha.msg.IHASnapshotRequest; import com.bigdata.ha.msg.IHASnapshotResponse; import com.bigdata.ha.msg.IHASyncRequest; @@ -992,6 +995,32 @@ } + @Override + public IHASnapshotDigestResponse computeHASnapshotDigest( + final IHASnapshotDigestRequest req) throws IOException, + NoSuchAlgorithmException, DigestException { + + if (haLog.isDebugEnabled()) + haLog.debug("req=" + req); + + // The commit counter of the desired closing root block. + final long commitCounter = req.getCommitCounter(); + + final MessageDigest digest = MessageDigest.getInstance("MD5"); + + /* + * Compute digest for snapshot for that commit point. + * + * Note: throws FileNotFoundException if no snapshot for that commit + * point. 
+ */ + getSnapshotManager().getDigest(commitCounter, digest); + + return new HASnapshotDigestResponse(req.getCommitCounter(), + digest.digest()); + + } + /** * {@inheritDoc} * Modified: branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java =================================================================== --- branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -1507,7 +1507,7 @@ */ while (true) { - long commitCounter = journal.getRootBlockView() + final long commitCounter = journal.getRootBlockView() .getCommitCounter(); try { @@ -1515,6 +1515,28 @@ final IHALogReader r = journal.getHALogWriter() .getReader(commitCounter + 1); + if (r.isEmpty()) { + + /* + * There is an empty HALog file. We can not apply it + * since it has no data. This ends our restore + * procedure. + */ + + break; + + } + + if (r.getOpeningRootBlock().getCommitCounter() != commitCounter) { + // Sanity check + throw new AssertionError(); + } + + if (r.getClosingRootBlock().getCommitCounter() != commitCounter + 1) { + // Sanity check + throw new AssertionError(); + } + applyHALog(r); doLocalCommit(r.getClosingRootBlock()); Modified: branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HARestore.java =================================================================== --- branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HARestore.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HARestore.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -50,8 +50,6 @@ * it forward to a specific commit point. * * @author <a href="mailto:tho...@us...">Bryan Thompson</a> - * - * FIXME HARestore : write test suite. */ public class HARestore { Modified: branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/SnapshotManager.java =================================================================== --- branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/SnapshotManager.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata-jini/src/java/com/bigdata/journal/jini/ha/SnapshotManager.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -27,11 +27,17 @@ import java.io.DataOutputStream; import java.io.File; import java.io.FileInputStream; +import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.FilenameFilter; import java.io.IOException; +import java.io.InputStream; import java.nio.ByteBuffer; +import java.security.DigestException; +import java.security.MessageDigest; import java.util.Formatter; +import java.util.LinkedList; +import java.util.List; import java.util.concurrent.Callable; import java.util.concurrent.ExecutionException; import java.util.concurrent.Future; @@ -47,6 +53,8 @@ import org.apache.log4j.Logger; import com.bigdata.btree.BytesUtil; +import com.bigdata.btree.ITuple; +import com.bigdata.btree.ITupleIterator; import com.bigdata.concurrent.FutureTaskMon; import com.bigdata.ha.halog.IHALogReader; import com.bigdata.ha.msg.HASnapshotResponse; @@ -59,6 +67,7 @@ import com.bigdata.journal.RootBlockUtility; import com.bigdata.journal.RootBlockView; import com.bigdata.quorum.QuorumException; +import com.bigdata.rawstore.Bytes; import com.bigdata.util.ChecksumUtility; /** @@ -78,7 +87,7 @@ /** * The file extension for journal snapshots. 
*/ - public final static String SNAPSHOT_EXT = ".jnl.zip"; + public final static String SNAPSHOT_EXT = ".jnl.gz"; /** * The prefix for the temporary files used to generate snapshots. @@ -112,6 +121,12 @@ * populated when the {@link HAJournal} starts from the file system and * maintained as snapshots are taken or destroyed. All operations on this * index MUST be synchronized on its object monitor. + * <p> + * Note: This index is not strictly necessary. We can also visit the files + * in the file system. However, the index makes it much faster to locate a + * specific snapshot based on a commit time and provides low latency access + * to the {@link IRootBlockView} for that snapshot (faster than opening the + * snapshot file on the disk). */ private final CommitTimeIndex snapshotIndex; @@ -155,18 +170,18 @@ } - /** - * An in memory index over the last commit time of each snapshot. This is - * populated when the {@link HAJournal} starts from the file system and - * maintained as snapshots are taken or destroyed. All operations on this - * index MUST be synchronized on its object monitor. - */ - CommitTimeIndex getSnapshotIndex() { +// /** +// * An in memory index over the last commit time of each snapshot. This is +// * populated when the {@link HAJournal} starts from the file system and +// * maintained as snapshots are taken or destroyed. All operations on this +// * index MUST be synchronized on its object monitor. +// */ +// private CommitTimeIndex getSnapshotIndex() { +// +// return snapshotIndex; +// +// } - return snapshotIndex; - - } - public SnapshotManager(final HAJournalServer server, final HAJournal journal, final Configuration config) throws IOException, ConfigurationException { @@ -444,6 +459,69 @@ } /** + * Find the commit counter for the most recent snapshot (if any). + * + * @return That commit counter -or- ZERO (0L) if there are no snapshots. + */ + public long getMostRecentSnapshotCommitCounter() { + + return snapshotIndex.getMostRecentSnapshotCommitCounter(); + + } + + /** + * Return the {@link IRootBlockView} identifying the snapshot having the largest + * commitTime that is less than or equal to the given value. + * + * @param timestamp + * The given timestamp. + * + * @return The {@link IRootBlockView} of the identified snapshot -or- + * <code>null</code> iff there are no snapshots in the index that + * satisfy the probe. + * + * @throws IllegalArgumentException + * if <i>timestamp</i> is less than or equal to ZERO (0L). + */ + public IRootBlockView find(final long timestamp) { + + return snapshotIndex.find(timestamp); + + } + + /** + * Return a list of all known snapshots. The list consists of the + * {@link IRootBlockView} for each snapshot. The list will be in order of + * increasing <code>commitTime</code>. This should also correspond to + * increasing <code>commitCounter</code>. + * + * @return A list of the {@link IRootBlockView} for the known snapshots. + */ + public List<IRootBlockView> getSnapshots() { + + final List<IRootBlockView> l = new LinkedList<IRootBlockView>(); + + synchronized (snapshotIndex) { + + @SuppressWarnings("unchecked") + final ITupleIterator<IRootBlockView> itr = snapshotIndex.rangeIterator(); + + while(itr.hasNext()) { + + final ITuple<IRootBlockView> t = itr.next(); + + final IRootBlockView rootBlock = t.getObject(); + + l.add(rootBlock); + + } + + } + + return l; + } + + /** * Return the {@link Future} of the current snapshot operation (if any). 
* * @return The {@link Future} of the current snapshot operation -or- @@ -950,4 +1028,73 @@ } // class SendStoreTask + + /** + * Compute the digest of a snapshot file. + * <p> + * Note: The digest is only computed for the data beyond the file header. + * This is for consistency with + * {@link IHABufferStrategy#computeDigest(Object, MessageDigest)} + * + * @param commitCounter + * @param digest + * @throws IOException + * @throws FileNotFoundException + * @throws DigestException + * + * TODO We should pin the snapshot if we are reading it to + * compute its digest. + */ + public void getDigest(final long commitCounter, final MessageDigest digest) + throws FileNotFoundException, IOException, DigestException { + + final File file = getSnapshotFile(commitCounter); + + // Note: Throws FileNotFoundException. + final GZIPInputStream is = new GZIPInputStream( + new FileInputStream(file)); + + try { + + if (log.isInfoEnabled()) + log.info("Computing digest: " + file); + + computeDigest(is, digest); + + } finally { + + is.close(); + + } + + } + + private static void computeDigest(final InputStream is, + final MessageDigest digest) throws DigestException, IOException { + + // The capacity of that buffer. + final int bufferCapacity = Bytes.kilobyte32 * 4; + + // A byte[] with that capacity. + final byte[] a = new byte[bufferCapacity]; + + while (true) { + + // Read as much as we can. + final int nread = is.read(a, 0/* off */, a.length); + + if (nread == -1) { + + // End of stream. + return; + + } + + // update digest + digest.update(a, 0/* off */, nread/* len */); + + } + + } + } Modified: branches/READ_CACHE/bigdata-jini/src/test/com/bigdata/journal/jini/ha/AbstractHAJournalServerTestCase.java =================================================================== --- branches/READ_CACHE/bigdata-jini/src/test/com/bigdata/journal/jini/ha/AbstractHAJournalServerTestCase.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata-jini/src/test/com/bigdata/journal/jini/ha/AbstractHAJournalServerTestCase.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -29,8 +29,10 @@ import java.io.File; import java.io.FileNotFoundException; import java.io.IOException; +import java.math.BigInteger; import java.security.DigestException; import java.security.NoSuchAlgorithmException; +import java.util.Arrays; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.TimeUnit; @@ -52,6 +54,8 @@ import com.bigdata.ha.HAGlue; import com.bigdata.ha.msg.HADigestRequest; import com.bigdata.ha.msg.HALogDigestRequest; +import com.bigdata.ha.msg.HARootBlockRequest; +import com.bigdata.ha.msg.HASnapshotDigestRequest; import com.bigdata.io.TestCase3; import com.bigdata.rdf.sail.TestConcurrentKBCreate; import com.bigdata.rdf.sail.webapp.NanoSparqlServer; @@ -670,6 +674,69 @@ } /** + * Verify that the digest of the journal is equal to the digest of the + * indicated snapshot on the specified service. + * <p> + * Note: This can only succeed if the journal is at the specified commit + * point. If there are concurrent writes on the journal, then its digest + * will no longer be consistent with the snapshot. + * + * @param service + * The service. + * @param commitCounter + * The commit counter for the snapshot. 
+ * + * @throws NoSuchAlgorithmException + * @throws DigestException + * @throws IOException + */ + protected void assertSnapshotDigestEquals(final HAGlue service, + final long commitCounter) throws NoSuchAlgorithmException, + DigestException, IOException { + + final long commitCounterBefore = service + .getRootBlock(new HARootBlockRequest(null/* storeUUID */)) + .getRootBlock().getCommitCounter(); + + // Verify the journal is at the desired commit point. + assertEquals(commitCounter, commitCounterBefore); + + final byte[] journalDigest = service.computeDigest( + new HADigestRequest(null/* storeUUID */)).getDigest(); + + final long commitCounterAfter = service + .getRootBlock(new HARootBlockRequest(null/* storeUUID */)) + .getRootBlock().getCommitCounter(); + + // Verify the journal is still at the desired commit point. + assertEquals(commitCounter, commitCounterAfter); + + final byte[] snapshotDigest = service.computeHASnapshotDigest( + new HASnapshotDigestRequest(commitCounter)).getDigest(); + + if (!BytesUtil.bytesEqual(journalDigest, snapshotDigest)) { + + /* + * Note: Provides base 16 rendering as per normal md5 runs. + */ + + final String journalStr = new BigInteger(1, journalDigest) + .toString(16); + + final String snapshotStr = new BigInteger(1, snapshotDigest) + .toString(16); + + fail("journal=" + journalStr + ", snapshot=" + snapshotStr); + +// fail("journal=" + Arrays.toString(journalDigest) + ", snapshot=" +// + Arrays.toString(snapshotDigest) + " for commitCounter=" +// + commitCounter + " on service=" + service); + + } + + } + + /** * Return the name of the foaf data set. * * @param string Modified: branches/READ_CACHE/bigdata-jini/src/test/com/bigdata/journal/jini/ha/TestHA3SnapshotPolicy.java =================================================================== --- branches/READ_CACHE/bigdata-jini/src/test/com/bigdata/journal/jini/ha/TestHA3SnapshotPolicy.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata-jini/src/test/com/bigdata/journal/jini/ha/TestHA3SnapshotPolicy.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -32,10 +32,7 @@ import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeoutException; -import java.util.zip.GZIPInputStream; -import net.jini.config.Configuration; - import org.apache.http.HttpResponse; import org.apache.http.client.HttpClient; import org.apache.http.impl.client.DefaultHttpClient; @@ -49,22 +46,14 @@ import com.bigdata.journal.IRootBlockView; import com.bigdata.quorum.Quorum; import com.bigdata.rdf.sail.webapp.client.ConnectOptions; +import com.bigdata.rdf.sail.webapp.client.HAStatusEnum; import com.bigdata.rdf.sail.webapp.client.RemoteRepository; /** * Test suites for an {@link HAJournalServer} quorum with a replication factor * of THREE (3) and a fully met {@link Quorum}. * - * TODO Verify that the snapshot is consistent in the data with the Journal. - * This could be done using a digest comparison of the journal and a digest - * computed from a {@link GZIPInputStream} reading from the snapshot. [The same - * approach could also work for snapshots under sustained writes if we modify - * the journal digest logic to only pay attention to the committed allocations.] - * - * TODO Verify snapshot is refused for service that is not joined with the met - * quorum. 
- * - * TODO HARestore test suite: Verify that the snapshot may be unziped and halogs + * FIXME HARestore test suite: Verify that the snapshot may be unziped and halogs * applied by the {@link HARestore} utility in order to obtain a journal * corresponding to a specific commit point. * <p> @@ -74,7 +63,7 @@ * TODO Verify will not take snapshot if size on disk of HALog files since the * last snapshot is LT some percentage. * - * TODO Verify release of old snapshot(s) and HALog(s) when a new snapshot is + * FIXME Verify release of old snapshot(s) and HALog(s) when a new snapshot is * taken in accordence with the {@link IRestorePolicy}. * <p> * Make sure that we never release the most current snapshot or HALogs required @@ -97,26 +86,57 @@ super(name); } +// /** +// * {@inheritDoc} +// * <p> +// * Note: This overrides some {@link Configuration} values for the +// * {@link HAJournalServer} in order to establish conditions suitable for +// * testing the {@link ISnapshotPolicy} and {@link IRestorePolicy}. +// */ +// @Override +// protected String[] getOverrides() { +// +// return new String[]{ +// "com.bigdata.journal.jini.ha.HAJournalServer.snapshotPolicy=new com.bigdata.journal.jini.ha.DefaultSnapshotPolicy()" +// }; +// +// } + /** - * {@inheritDoc} - * <p> - * Note: This overrides some {@link Configuration} values for the - * {@link HAJournalServer} in order to establish conditions suitable for - * testing the {@link ISnapshotPolicy} and {@link IRestorePolicy}. + * Start A. Verify that we can not take a snapshot since it is not joined + * with the met quorum. */ - @Override - protected String[] getOverrides() { + public void testA_snapshot_refused_since_not_met() throws Exception { + + // Start A. + final HAGlue serverA = startA(); + + // Verify the REST API is up and service is not ready. + // TODO Might have to retry this if 404 observed. + assertEquals(HAStatusEnum.NotReady, getNSSHAStatus(serverA)); - return new String[]{ - "com.bigdata.journal.jini.ha.HAJournalServer.snapshotPolicy=new com.bigdata.journal.jini.ha.DefaultSnapshotPolicy()" - }; + // Request a snapshot. + final Future<IHASnapshotResponse> ft = serverA + .takeSnapshot(new HASnapshotRequest(0/* percentLogSize */)); + + if(ft == null) { + // Ok. No snapshot will be taken. + return; + + } + + ft.cancel(true/* mayInterruptIfRunning */); + + fail("Not expecting a future since service is not joined with a met quorum."); + } /** * Start two services. The quorum meets. Take a snapshot. Verify that the * snapshot appears within a resonable period of time and that it is for - * <code>commitCounter:=1</code> (just the KB create). + * <code>commitCounter:=1</code> (just the KB create). Verify that the + * digest of the snapshot agrees with the digest of the journal. */ public void testAB_snapshot() throws Exception { @@ -132,7 +152,7 @@ awaitKBExists(serverA); final HAGlue leader = quorum.getClient().getLeader(token); - + assertEquals(serverA, leader); // A is the leader. { // Verify quorum is still valid. @@ -161,13 +181,18 @@ final IRootBlockView snapshotRB = ft.get().getRootBlock(); + final long commitCounter = 1L; + // Verify snapshot is for the expected commit point. - assertEquals(1L, snapshotRB.getCommitCounter()); + assertEquals(commitCounter, snapshotRB.getCommitCounter()); // Snapshot directory contains the desired filename. assertEquals(new String[] { "00000000000000000001" + SnapshotManager.SNAPSHOT_EXT }, getSnapshotDirA().list()); + // Verify digest of snapshot agrees with digest of journal. 
+ assertSnapshotDigestEquals(leader, commitCounter); + } } @@ -176,7 +201,8 @@ * Start two services. The quorum meets. Take a snapshot using B (NOT the * leader). Verify that the snapshot appears within a resonable period of * time and that it is for <code>commitCounter:=1</code> (just the KB - * create). + * create). Verify that the digest of the snapshot agrees with the digest of + * the journal. */ public void testAB_snapshotB() throws Exception { @@ -225,8 +251,10 @@ final IRootBlockView snapshotRB = ft.get().getRootBlock(); + final long commitCounter = 1L; + // Verify snapshot is for the expected commit point. - assertEquals(1L, snapshotRB.getCommitCounter()); + assertEquals(commitCounter, snapshotRB.getCommitCounter()); // Snapshot directory remains empty on A. assertEquals(0, getSnapshotDirA().list().length); @@ -235,6 +263,9 @@ assertEquals(new String[] { "00000000000000000001" + SnapshotManager.SNAPSHOT_EXT }, getSnapshotDirB().list()); + // Verify digest of snapshot agrees with digest of journal. + assertSnapshotDigestEquals(serverB, commitCounter); + } } @@ -289,13 +320,18 @@ final IRootBlockView snapshotRB = ft.get().getRootBlock(); + final long commitCounter = 1L; + // Verify snapshot is for the expected commit point. - assertEquals(1L, snapshotRB.getCommitCounter()); + assertEquals(commitCounter, snapshotRB.getCommitCounter()); // Snapshot directory contains the desired filename. assertEquals(new String[] { "00000000000000000001" + SnapshotManager.SNAPSHOT_EXT }, getSnapshotDirA().list()); + // Verify digest of snapshot agrees with digest of journal. + assertSnapshotDigestEquals(leader, commitCounter); + } /* @@ -367,7 +403,10 @@ doSnapshotRequest(leader); - // Get the Future. Should still be there, but if not then will be null. + /* + * Get the Future. Should still be there, but if not then will be + * null (it which case the snapshot is already done). + */ final Future<IHASnapshotResponse> ft = leader .takeSnapshot(new HASnapshotRequest(1000/* percentLogSize */)); @@ -395,6 +434,8 @@ assertEquals(new String[] { "00000000000000000001" + SnapshotManager.SNAPSHOT_EXT }, getSnapshotDirA().list()); + assertSnapshotDigestEquals(leader, 1L/* commitCounter */); + } } @@ -528,8 +569,10 @@ final IRootBlockView snapshotRB = ft.get().getRootBlock(); + final long commitCounter = 2L; + // Verify snapshot is for the expected commit point. - assertEquals(2L, snapshotRB.getCommitCounter()); + assertEquals(commitCounter, snapshotRB.getCommitCounter()); // Snapshot directory contains the desired filename. assertEquals( @@ -537,6 +580,9 @@ getSnapshotDirA(), 2L).getName() }, getSnapshotDirA().list()); + // Verify digest of snapshot agrees with digest of journal. 
+ assertSnapshotDigestEquals(leader, commitCounter); + } } Modified: branches/READ_CACHE/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/HAStatusServletUtil.java =================================================================== --- branches/READ_CACHE/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/HAStatusServletUtil.java 2013-03-20 19:04:55 UTC (rev 7022) +++ branches/READ_CACHE/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/HAStatusServletUtil.java 2013-03-20 19:09:58 UTC (rev 7023) @@ -26,6 +26,11 @@ import java.io.FilenameFilter; import java.io.IOException; import java.io.PrintWriter; +import java.math.BigInteger; +import java.security.DigestException; +import java.security.MessageDigest; +import java.security.NoSuchAlgorithmException; +import java.util.List; import java.util.UUID; import java.util.concurrent.TimeoutException; @@ -36,9 +41,12 @@ import com.bigdata.ha.HAGlue; import com.bigdata.ha.QuorumService; +import com.bigdata.ha.halog.HALogReader; import com.bigdata.ha.halog.IHALogReader; import com.bigdata.ha.msg.HASnapshotRequest; import com.bigdata.journal.IIndexManager; +import com.bigdata.journal.IRootBlockView; +import com.bigdata.journal.RootBlockView; import com.bigdata.journal.jini.ha.HAJournal; import com.bigdata.journal.jini.ha.SnapshotManager; import com.bigdata.quorum.AsynchronousQuorumCloseException; @@ -104,6 +112,8 @@ final QuorumService<HAGlue> quorumService = quorum.getClient(); + final boolean digests = req.getParameter(StatusServlet.DIGESTS) != null; + current.node("h1", "High Availability"); // The quorum state. @@ -144,8 +154,25 @@ { final File file = journal.getFile(); if (file != null) { + String digestStr = null; + if (digests) { + try { + final MessageDigest digest = MessageDigest + .getInstance("MD5"); + journal.getBufferStrategy().computeDigest( + null/* snapshot */, digest); + digestStr = new BigInteger(1, digest.digest()) + .toString(16); + } catch (NoSuchAlgorithmException ex) { + // ignore + } catch (DigestException ex) { + // ignore + } + } p.text("HAJournal: file=" + file + ", nbytes=" - + journal.size()).node("br").close(); + + journal.size() + + (digestStr == null ? "" : ", md5=" + digestStr)) + .node("br").close(); } } @@ -168,16 +195,54 @@ nbytes += file.length(); nfiles++; } - p.text("HALogDir: nfiles=" + nfiles + ", nbytes=" - + nbytes + ", path=" + haLogDir).node("br") - .close(); + p.text("HALogDir: nfiles=" + nfiles + ", nbytes=" + nbytes + + ", path=" + haLogDir).node("br").close(); + if (digests) { + /* + * List each HALog file together with its digest. + * + * FIXME We need to request the live log differently and use + * the lock for it. That makes printing the HALog digests + * here potentially probemantic if there are outstanding + * writes. + */ + for (File file : a) { + String digestStr = null; + final IHALogReader r = new HALogReader(file); + try { + if (digests && !r.isEmpty()) { + try { + final MessageDigest digest = MessageDigest + .getInstance("MD5"); + r.computeDigest(digest); + digestStr = new BigInteger(1, + digest.digest()).toString(16); + } catch (NoSuchAlgorithmException ex) { + // ignore + } catch (DigestException ex) { + // ignore + } + } + } finally { + r.close(); + } + p.text("HALogFile: closingCommitCounter=" + + r.getClosingRootBlock().getCommitCounter() + + ", file=" + + file + + ", nbytes=" + + nbytes + + (digestStr == null ? "" : ", md5=" + + digestStr)).node("br").close(); + } + } } /* * Report #of files and bytes in the snapshot directory. 
*/ { - final File snapshotDir = ((HAJournal) journal) + final File snapshotDir = journal .getSnapshotManager().getSnapshotDir(); final File[] a = snapshotDir.listFiles(new FilenameFilter() { @Override @@ -193,6 +258,44 @@ } p.text("SnapshotDir: nfiles=" + nfiles + ", nbytes=" + nbytes + ", path=" + snapshotDir).node("br").close(); + + if (true) { + + /* + * List the available snapshots. + */ + final List<IRootBlockView> snapshots = journal + .getSnapshotManager().getSnapshots(); + + for (IRootBlockView rb : snapshots) { + + String digestStr = null; + if (digests) { + try { + final MessageDigest digest = MessageDigest + .getInstance("MD5"); + journal.getSnapshotManager().getDigest( + rb.getCommitCounter(), digest); + digestStr = new BigInteger(1, digest.digest()) + .toString(16); + } catch (NoSuchAlgorithmException ex) { + // ignore + } catch (DigestException ex) { + // ignore + } + } + + p.text("SnapshotFile: commitTime=" + + RootBlockView.toString(rb.getLastCommitTime()) + + ", commitCounter=" + + rb.getCommitCounter() + + (digestStr == null ? "" : ", md5=" + + digestStr)).node("br").close(); + + } + + } + } /* Modified: branches/READ_CACHE/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/StatusServlet.java =================================================================== --- branches/READ_CACHE/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/Stat... [truncated message content] |