Update of /cvsroot/cweb/bigdata/src/java/com/bigdata/journal
In directory sc8-pr-cvs4.sourceforge.net:/tmp/cvs-serv24584/src/java/com/bigdata/journal
Modified Files:
ResourceManager.java AbstractJournal.java
Log Message:
Fixed a bug in UUID assignment to index segments.
Added some logic for fast data output for btree nodes and leaves.
Index: ResourceManager.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/ResourceManager.java,v
retrieving revision 1.2
retrieving revision 1.3
diff -C2 -d -r1.2 -r1.3
*** ResourceManager.java 29 Mar 2007 17:01:33 -0000 1.2
--- ResourceManager.java 10 Apr 2007 18:33:31 -0000 1.3
***************
*** 53,56 ****
--- 53,57 ----
import org.apache.log4j.Logger;
+ import com.bigdata.objndx.AbstractBTree;
import com.bigdata.objndx.IndexSegment;
import com.bigdata.objndx.IndexSegmentBuilder;
***************
*** 179,182 ****
--- 180,185 ----
* finalized (unisolated vs isolated vs index segment can be
* identified based on their interfaces).
+ *
+ * @todo add reporting for {@link AbstractBTree#reopen()}; see the sketch below.
*/
static public void closeUnisolatedBTree(String name) {
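
A minimal sketch of the reopen reporting mentioned in the @todo above. The
method and counter names are hypothetical; only the log4j Logger (assumed to
be the class's existing "log" field) and AbstractBTree come from the diff:

    import java.util.concurrent.atomic.AtomicLong;

    /**
     * Hypothetical counterpart to closeUnisolatedBTree(String): counts
     * re-open events so they can be reported alongside open/close.
     */
    private static final AtomicLong btreeReopenCount = new AtomicLong();

    static public void reopenUnisolatedBTree(String name) {
        final long n = btreeReopenCount.incrementAndGet();
        if (log.isInfoEnabled()) {
            log.info("re-open: name=" + name + ", totalReopens=" + n);
        }
    }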
Index: AbstractJournal.java
===================================================================
RCS file: /cvsroot/cweb/bigdata/src/java/com/bigdata/journal/AbstractJournal.java,v
retrieving revision 1.10
retrieving revision 1.11
diff -C2 -d -r1.10 -r1.11
*** AbstractJournal.java 4 Apr 2007 16:52:16 -0000 1.10
--- AbstractJournal.java 10 Apr 2007 18:33:31 -0000 1.11
***************
*** 158,167 ****
* to secondary journals on failover hosts. </li>
* <li> Scale-up database (automatic re-partitioning of indices and processing
! * of deletion markers).</li>
! * <li> AIO for the Direct and Disk modes.</li>
* <li> GOM integration, including: support for primary key (clustered) indices;
* using queues from GOM to journal/database segment server supporting both
* embedded and remote scenarios; and using state-based conflict resolution to
! * obtain high concurrency for generic objects, link set metadata, and indices.</li>
* <li> Scale-out database, including:
* <ul>
--- 158,170 ----
* to secondary journals on failover hosts. </li>
* <li> Scale-up database (automatic re-partitioning of indices and processing
! * of deletion markers). Note that the split point must be chosen with some
! * awareness of the application keys in order to provide an atomic row update
! * guarantee for keys formed as { primaryKey, columnName, timestamp }; see the sketch below.</li>
! * <li> AIO for the Direct and Disk modes (low priority since not IO bound).</li>
* <li> GOM integration, including: support for primary key (clustered) indices;
* using queues from GOM to journal/database segment server supporting both
* embedded and remote scenarios; and using state-based conflict resolution to
! * obtain high concurrency for generic objects, link set metadata, and
! * indices; plus distributed split cache and hot cache support.</li>
* <li> Scale-out database, including:
* <ul>
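
The atomic row update note above deserves a concrete illustration. A minimal
sketch, assuming fixed-length primary keys and lexicographic unsigned byte[]
key ordering (all names here are hypothetical, not bigdata APIs): the
candidate split key is truncated to its primaryKey component, so every
{ primaryKey, columnName, timestamp } entry for a row sorts into the same
partition.

    public class RowBoundarySplit {

        /**
         * Truncate a size-based candidate separator key (e.g., the middle
         * key of a full partition) to the primaryKey component.
         */
        public static byte[] chooseSplitKey(byte[] candidateKey,
                int primaryKeyLength) {
            // The separator { primaryKey } is <= every longer key
            // { primaryKey, columnName, timestamp } sharing that prefix,
            // so the entire row lands in the right-hand partition and a
            // row update never spans the split point.
            final byte[] splitKey = new byte[primaryKeyLength];
            System.arraycopy(candidateKey, 0, splitKey, 0, primaryKeyLength);
            return splitKey;
        }
    }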
***************
*** 177,180 ****
--- 180,189 ----
* </ol>
*
+ * @todo Move Addr methods onto IRawStore and store the bitsplit point in the
+ * root block. This will let us provision the journal to have more
+ * distinct offsets at which records can be written (e.g., 35 bits for
+ * that vs 32) by accepting a maximum record length with fewer bits (29
+ * vs 32). The bitsplit point can be chosen differently for each store; see the sketch below.
+ *
* @todo Define distributed transaction protocol. Pay attention to 2-phase or
* 3-phase commits when necessary, but take advantage of locality when
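
A minimal sketch of the bitsplit Addr encoding from the @todo above (class
and method names are hypothetical, not the existing Addr API). With
offsetBits = 35 a store can address 2^35 distinct record offsets at the cost
of a 2^29 - 1 byte maximum record length; the current fixed split is 32/32.

    public class BitSplitAddr {

        private final int lengthBits;  // 64 - offsetBits, e.g., 29
        private final long maxOffset;  // 2^offsetBits - 1
        private final long maxLength;  // 2^lengthBits - 1

        /** The bitsplit point would be read from each store's root block. */
        public BitSplitAddr(final int offsetBits) {
            if (offsetBits < 1 || offsetBits > 63)
                throw new IllegalArgumentException("offsetBits=" + offsetBits);
            this.lengthBits = 64 - offsetBits;
            this.maxOffset = (1L << offsetBits) - 1;
            this.maxLength = (1L << lengthBits) - 1;
        }

        /** Pack an {offset, byte count} pair into a single long. */
        public long toAddr(final long offset, final long nbytes) {
            if (offset < 0 || offset > maxOffset)
                throw new IllegalArgumentException("offset=" + offset);
            if (nbytes < 0 || nbytes > maxLength)
                throw new IllegalArgumentException("nbytes=" + nbytes);
            return (offset << lengthBits) | nbytes;
        }

        public long getOffset(final long addr) {
            return addr >>> lengthBits;
        }

        public long getByteCount(final long addr) {
            return addr & maxLength;
        }
    }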
***************
*** 190,198 ****
* failover.
*
! * @todo I need to revisit the assumptions for very large objects in the face of
! * the recent / planned redesign. I expect that using an index with a key
! * formed as [URI][chunk#] would work just fine. Chunks would then be
! * limited to 32k or so. Writes on such indices should probably be
! * directed to a journal using a disk-only mode.
*
* @todo Checksums and/or record compression are currently handled on a per-{@link BTree}
--- 199,206 ----
* failover.
*
! * @todo I need to revisit the assumptions for very large objects. I expect that
! * using an index with a key formed as [URI][chunk#] would work just fine.
! * Chunks would then be limited to 32k or so. Writes on such indices
! * should probably be directed to a journal using a disk-only mode (key scheme sketched below).
*
* @todo Checksums and/or record compression are currently handled on a per-{@link BTree}
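
A minimal sketch of the [URI][chunk#] key scheme from the large object @todo
above. Names are hypothetical; the 32k chunk size is taken from the note.
Assuming keys are compared as unsigned byte[]s, the big-endian chunk number
keeps the chunks of one object contiguous and in write order.

    import java.io.UnsupportedEncodingException;
    import java.nio.ByteBuffer;

    public class LargeObjectKeys {

        public static final int CHUNK_SIZE = 32 * 1024;

        /** Form the key for the i-th chunk of the object named by the URI. */
        public static byte[] chunkKey(final String uri, final int chunkNo) {
            final byte[] uriBytes;
            try {
                uriBytes = uri.getBytes("UTF-8");
            } catch (UnsupportedEncodingException e) {
                // UTF-8 is always supported by the JVM.
                throw new RuntimeException(e);
            }
            final ByteBuffer buf = ByteBuffer.allocate(uriBytes.length + 1 + 4);
            buf.put(uriBytes);
            buf.put((byte) 0);   // terminator: no URI is a prefix of another
            buf.putInt(chunkNo); // big-endian; non-negative chunk numbers
                                 // sort in ascending numeric order
            return buf.array();
        }
    }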
|