Kevin,
 
I agree with Alex's point.  DBCache presumes a page-locking strategy, which would prevent (1) from
occurring.
 
Overall, my thinking on row/page/segment locking is that we need to get engaged in a new transaction
engine, which will be of direct benefit.  With that in hand, we can consider row-locking strategies.  I
would rather duplicate DBCache first and then examine row-locking solutions.
 
-bryan
-----Original Message-----
From: jdbm-developer-admin@lists.sourceforge.net [mailto:jdbm-developer-admin@lists.sourceforge.net] On Behalf Of Kevin Day
Sent: Sunday, January 15, 2006 2:13 PM
To: JDBM Developer listserv
Subject: [Jdbm-developer] DBCache discussion

Hi all - I just finished reading the DBCache paper, and I have a couple of questions/thoughts I wanted to get your opinions on.
 

Thought 1:  The first question has to do with long-transaction support.  I believe the long-transaction support described in the paper has a problem, and I was wondering if you could help me understand...  In the paper, they suggest writing the *pre*-transaction version of a page to a transaction-specific file and writing the actual changes into the DB itself.  If a transaction has to be rolled back, the pre-transaction version is restored into the DB.
 
This seems like it has one very serious problem when it comes to multiple-transaction support:  any transaction that begins after the long transaction begins will wind up loading a changed (uncommitted) page from the DB, because the page won't be in the cache.  This could lead to reads of inconsistent data...
 
Am I missing something here?  It sure seems like it would make more sense to write changed pages (for pages that overflow the cache due to a long transaction) to a per-transaction file.  Roll-back is performed by deleting the file.  Commit marks the transaction file as complete, then copies the data from the transaction file into the DB.  If the copy fails, restart can detect that the transaction file is marked as complete and the copy can occur again (similar to jdbm's current log file - but one file per long transaction).
 
Is there something important I'm missing here?  I can't imagine a scenario where storing a pre-transaction version of a page external to the DB, and then updating the DB directly, would ever make sense...
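To make that concrete, here's a rough sketch of the per-transaction file protocol I have in mind.  All of the names (TxnFile, the record layout, the commit marker) are made up for illustration - this isn't JDBM code, and I've ignored fsync ordering and concurrent readers:

import java.io.*;
import java.util.HashMap;
import java.util.Map;

class TxnFile {
    private static final int PAGE_SIZE = 8192;          // illustrative
    private static final long COMMIT_MARKER = 0xC0FFEEL;

    private final File file;                            // one file per long txn
    private final Map<Long, byte[]> pages = new HashMap<>();

    TxnFile(File file) { this.file = file; }

    /** Buffer a dirty page that overflowed the cache. */
    void writePage(long pageId, byte[] data) {
        pages.put(pageId, data.clone());
    }

    /** Roll-back is just deleting the (unmarked) transaction file. */
    void rollback() {
        pages.clear();
        file.delete();
    }

    /** Commit: persist the pages, mark the file complete, then copy into the DB. */
    void commit(RandomAccessFile db) throws IOException {
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(file)))) {
            for (Map.Entry<Long, byte[]> e : pages.entrySet()) {
                out.writeLong(e.getKey());              // page id
                out.write(e.getValue());                // page image
            }
            out.writeLong(COMMIT_MARKER);               // "marked as complete"
        }
        applyTo(db);       // if we crash here, recover() redoes the copy
        file.delete();     // copy done - the transaction file can go
    }

    private void applyTo(RandomAccessFile db) throws IOException {
        for (Map.Entry<Long, byte[]> e : pages.entrySet()) {
            db.seek(e.getKey() * PAGE_SIZE);
            db.write(e.getValue());
        }
    }

    /** Restart: replay a leftover file only if it carries the marker. */
    static void recover(File f, RandomAccessFile db) throws IOException {
        try (RandomAccessFile in = new RandomAccessFile(f, "r")) {
            long len = in.length();
            boolean committed = false;
            if (len >= Long.BYTES) {
                in.seek(len - Long.BYTES);
                committed = (in.readLong() == COMMIT_MARKER);
            }
            if (committed) {
                in.seek(0);
                long remaining = len - Long.BYTES;
                while (remaining > 0) {
                    long pageId = in.readLong();
                    byte[] page = new byte[PAGE_SIZE];
                    in.readFully(page);
                    db.seek(pageId * PAGE_SIZE);
                    db.write(page);
                    remaining -= Long.BYTES + PAGE_SIZE;
                }
            }
        }
        f.delete();   // committed: copy is done; uncommitted: roll back
    }
}

Roll-back never touches the DB at all, which is the whole point - the DB only ever sees committed pages.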
 
 
 
Thought 2:  Because roll-back is supported at the page level only, some higher-level synchronization and/or transaction mechanism is going to be required.  I do not think it is generally acceptable for a high-level transaction (i.e. the storage of an object/row of data) to fail due to locking problems unless there is actually a locking issue with that particular object.  If transactions are managed at the page level only, then an update to one row could fail because of work being done on a different row of the same page by a different transaction.  I think that if we are really going to do this, we need some sort of blocking operation that kicks in when a page conflict arises that is not also a row conflict.
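Something like the following is what I'm picturing - a page-granular lock table that only fails a transaction on a genuine row conflict and otherwise just blocks until the page frees up.  Again, the names are invented for the example, and a real version would need deadlock detection and timeouts:

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class PageLockTable {
    private static final class Holder {
        final long txnId;
        final Set<Long> rows = new HashSet<>();   // rows this txn touched
        Holder(long txnId) { this.txnId = txnId; }
    }

    // one holder per page, because undo is page-granular
    private final Map<Long, Holder> holders = new HashMap<>();

    /** Lock (pageId, rowId) for txnId, blocking on page-only conflicts. */
    synchronized void lock(long txnId, long pageId, long rowId)
            throws InterruptedException {
        for (;;) {
            Holder h = holders.get(pageId);
            if (h == null) {                      // page free: take it
                h = new Holder(txnId);
                h.rows.add(rowId);
                holders.put(pageId, h);
                return;
            }
            if (h.txnId == txnId) {               // we already own the page
                h.rows.add(rowId);
                return;
            }
            if (h.rows.contains(rowId))           // genuine row conflict
                throw new IllegalStateException(
                        "row " + rowId + " on page " + pageId + " is locked");
            wait();   // page conflict that is NOT a row conflict: just block
        }
    }

    /** Release the whole page when txnId commits or rolls back. */
    synchronized void release(long txnId, long pageId) {
        Holder h = holders.get(pageId);
        if (h != null && h.txnId == txnId) {
            holders.remove(pageId);
            notifyAll();                          // wake blocked writers
        }
    }
}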
 
 
 
Thought 3:  Java itself may have some interesting implications for the implementation of the ring buffer/safe concept.  Using NIO, it may be faster to write data into the DB directly from the Safe itself, instead of from RAM.  NIO's low-level transfer operations let the operating system handle the byte transfer, instead of having to move the bytes across the native/JVM boundary...
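For example (the file names and offset are placeholders), FileChannel.transferTo can ask the OS to move bytes from the safe file straight into the DB file without staging them in the Java heap:

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class SafeToDbCopy {
    public static void main(String[] args) throws IOException {
        try (FileChannel safe = FileChannel.open(Paths.get("txn-42.safe"),
                     StandardOpenOption.READ);
             FileChannel db = FileChannel.open(Paths.get("store.db"),
                     StandardOpenOption.WRITE)) {

            long dbOffset = 100 * 8192L;   // illustrative target page offset
            long pos = 0, count = safe.size();
            // transferTo may move fewer bytes than asked for, so loop.
            while (pos < count) {
                db.position(dbOffset + pos);
                pos += safe.transferTo(pos, count - pos, db);
            }
            db.force(true);                // flush data and metadata to disk
        }
    }
}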
 
 
 
Please let me know your thoughts - especially on anything I might be missing with #1...
 
Cheers,
 
- K
 
 