Kevin,
 
wrt before/after images, I am inclined to implement DBCache as it is, get that working, and then look at how we would change it based on what we have learned.  I expect to learn quite a bit about this as I get into it more deeply.  You may be right that we need to write after pages to a per-transaction file in order to support row locking, but I would like to see how the whole thing plays out in practice first.  It might be a good idea to track down the author (who has continued to publish in this space) and put some questions like this to him.  He also wrote a longer version of the paper (his PhD thesis), but it is in German.
 
> It seems to me that the development of a generic DB Cache implementation should be do-able in isolation from JDBM - but I think the TransactionManager API (if there ever was such a thing) needs a complete overhaul to make this happen.
 
> Is it even possible to have an interface to the transaction manager that is independent of its implementation?  If so, what would a starting point look like?
 
Good questions.  I see this as an API for a segment of pages treated as undifferentiated storage, with the BaseRecordManager and the record allocation policy imposing structure on how that segment is used.  In that frame, record-level management becomes an application of the segment API.  I have seen this view put forward in the architecture of more recent systems.
 
I think of this as a "segment" API since I would like to provide for more than one "segment", with each segment decomposing into a database and a cache file.  A jdbm store could still be one "segment", as it is today, but it would be possible to break the store into multiple segments as well.
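
To make that concrete, here is a rough sketch of the kind of segment interface I have in mind.  Every name below is invented for illustration - none of this is existing jdbm code:

    import java.io.IOException;

    /**
     * Sketch of a page-level segment API.  A segment is undifferentiated
     * page storage; BaseRecordManager and the record allocation policy
     * impose structure on top of it.
     */
    public interface Segment {

        /** Fixed page size for this segment, in bytes. */
        int pageSize();

        /** Read the page with the given page number. */
        byte[] readPage(long pageNo) throws IOException;

        /** Write a page image within the scope of a transaction. */
        void writePage(long txId, long pageNo, byte[] data) throws IOException;

        /** Make all pages written by the transaction durable. */
        void commit(long txId) throws IOException;

        /** Discard all pages written by the transaction. */
        void rollback(long txId) throws IOException;
    }

The record manager would then be written purely against this interface, never against the store file directly.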
 
-bryan
-----Original Message-----
From: jdbm-developer-admin@lists.sourceforge.net [mailto:jdbm-developer-admin@lists.sourceforge.net] On Behalf Of Kevin Day
Sent: Tuesday, January 17, 2006 10:54 AM
To: JDBM Developer listserv
Subject: re[2]: [Jdbm-developer] DBCache discussion

Bryan-
 
I'm in agreement that having a cache like this will be a good idea - even if we don't get into multi-transaction support in the higher level layers right away.
 
If we go for long transactions (and I think we do need to), I vote for an implementation that writes *after* pages to a per-transaction file on disk as pages are evicted from the cache.  If a transaction needs access to a page that's in the transaction file, it can retrieve the contents from the file and return them to cache, etc...
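
To make sure we are picturing the same mechanism, here is a rough sketch of that eviction path.  All of the names are invented, and real code would need to handle concurrency and partial writes:

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.HashMap;
    import java.util.Map;

    // Sketch of the eviction path described above; nothing here is jdbm code.
    class LongTxPageStore {

        private final RandomAccessFile txFile;  // this transaction's spill file
        private final Map<Long, Long> offsets = new HashMap<Long, Long>();
        private final int pageSize;

        LongTxPageStore(RandomAccessFile txFile, int pageSize) {
            this.txFile = txFile;
            this.pageSize = pageSize;
        }

        /** Cache evicts a dirty page owned by this transaction: the *after*
            image (assumed to be pageSize bytes) goes to the transaction
            file, never to the DB. */
        void evict(long pageNo, byte[] afterImage) throws IOException {
            Long offset = offsets.get(pageNo);
            if (offset == null) {
                offset = Long.valueOf(txFile.length());  // append a new slot
                offsets.put(pageNo, offset);
            }
            txFile.seek(offset.longValue());
            txFile.write(afterImage);
        }

        /** Cache miss: this transaction's after image wins over the
            (unmodified) page in the DB; null means read from the DB. */
        byte[] readBack(long pageNo) throws IOException {
            Long offset = offsets.get(pageNo);
            if (offset == null) return null;
            byte[] page = new byte[pageSize];
            txFile.seek(offset.longValue());
            txFile.readFully(page);
            return page;
        }
    }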
 
It seems to me that the development of a generic DB Cache implementation should be do-able in isolation from JDBM - but I think the TransactionManager API (if there ever was such a thing) needs a complete overhaul to make this happen.
 
Is it even possible to have an interface to the transaction manager that is independent of its implementation?  If so, what would a starting point look like?
 
- K
 
Kevin,
 
I agree with Alex's point.  DBCache presumes a page locking strategy which would prevent (1) from occurring.

Overall, my thinking on row/page/segment locking is that we need to get engaged in a new transaction engine, which will be of direct benefit.  With that in hand we can consider row locking strategies.  I would rather replicate DBCache first and then examine row locking solutions.
 
-bryan

-----Original Message-----
From: jdbm-developer-admin@lists.sourceforge.net [mailto:jdbm-developer-admin@lists.sourceforge.net] On Behalf Of Kevin Day
Sent: Sunday, January 15, 2006 2:13 PM
To: JDBM Developer listserv
Subject: [Jdbm-developer] DBCache discussion


Hi all - just got done reading the DBCache paper, and I had a couple of questions/thoughts I wanted to get your opinions on.
 

Thought 1:  The first question has to do with long transaction support.  I believe that the long transaction support described in the paper has a problem, and I was wondering if you could help me understand...  In the paper, they suggest writing the *pre*-transaction version of a page to a transaction-specific file, and writing actual changes into the DB itself.  If a transaction has to be rolled back, the pre-transaction version is restored into the DB.

This seems like it has one very serious problem when it comes to multiple transaction support:  if there are other transactions that begin after the long transaction begins, they will wind up reading a changed page from the DB (the page won't be in the cache).  This could lead to reads of inconsistent data...
 
Am I missing something here?  It sure seems like it would make more sense to write changed pages (for pages that overflow the cache due to a long transaction) to a per-transaction file.  Roll-back is performed by deleting the file.  Commit marks the transaction file as complete, then copies data from the transaction file into the DB.  If the copy fails, restart can detect that the transaction file is marked as complete, and the copy can be redone (similar to jdbm's current log file - but one file per long transaction).
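
Roughly, in code, the commit and restart sequence I am describing would look something like this.  The one-byte COMMITTED flag and the file layout are made up for illustration:

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;

    // Sketch of the per-transaction commit/recovery protocol described
    // above.  Assumed file layout: [flag byte][pageNo:long, page bytes]...
    // The flag byte is written as 0 (pending) when the file is created.
    class TxCommit {

        private static final byte COMMITTED = 1;
        private static final int PAGE_SIZE = 4096;  // assumed page size

        /** Commit: force the after images, durably mark the file complete,
            then copy into the DB.  The copy is idempotent, so a crash after
            the mark is repaired by redoing the copy on restart. */
        static void commit(File file, RandomAccessFile db) throws IOException {
            try (RandomAccessFile tx = new RandomAccessFile(file, "rw")) {
                tx.getFD().sync();           // 1. after images are durable
                tx.seek(0);
                tx.writeByte(COMMITTED);     // 2. mark transaction complete
                tx.getFD().sync();
                copyPages(tx, db);           // 3. copy after images into DB
            }
            file.delete();                   // 4. discard the transaction file
        }

        /** Restart: finish marked transactions, discard unmarked ones
            (roll-back is just deleting the file). */
        static void recover(File file, RandomAccessFile db) throws IOException {
            try (RandomAccessFile tx = new RandomAccessFile(file, "rw")) {
                tx.seek(0);
                if (tx.readByte() == COMMITTED) copyPages(tx, db);
            }
            file.delete();
        }

        private static void copyPages(RandomAccessFile tx, RandomAccessFile db)
                throws IOException {
            tx.seek(1);                      // skip the flag byte
            byte[] page = new byte[PAGE_SIZE];
            while (tx.getFilePointer() < tx.length()) {
                long pageNo = tx.readLong();
                tx.readFully(page);
                db.seek(pageNo * PAGE_SIZE);
                db.write(page);
            }
            db.getFD().sync();
        }
    }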
 
Is there something important I'm missing here?  I can't imagine a scenario where storing a pre-transaction version of a page external to the DB, then updating the DB directly, would ever make any sense...
 
 
 
Thought 2:  Because roll-back is supported at the page level only, some higher-level synchronization and/or transaction mechanism is going to be required.  I do not think it is generally acceptable for a high-level transaction (i.e. the storage of an object/row of data) to fail due to locking problems unless there is actually a locking conflict on that particular object.  If transactions are managed at the page level only, then an update to one row could fail because of work being done on a different row on the same page by a different transaction.  I think that if we are really going to do this, we need some sort of blocking operation that kicks in when a page conflict arises that is not also a row conflict.
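
As a strawman for that blocking operation (invented names; a real version would also need deadlock detection or timeouts):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;

    // One fair lock per page: a transaction touching a different row on
    // the same page waits for the current holder instead of aborting.
    class PageLockTable {

        private final ConcurrentHashMap<Long, ReentrantLock> locks =
                new ConcurrentHashMap<Long, ReentrantLock>();

        void lockPage(long pageNo) {
            // one lock object per page; fair=true hands the page over
            // in arrival order rather than failing the transaction
            locks.computeIfAbsent(pageNo, p -> new ReentrantLock(true)).lock();
        }

        void unlockPage(long pageNo) {
            locks.get(pageNo).unlock();
        }
    }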
 
 
 
Thought 3:  Java itself may have some interesting implications for the implementation of the ring buffer/safe concept.  Using NIO, it may be faster to write data into the DB directly from the Safe itself, instead of from RAM.  NIO's low-level transfer operations let the file system handle the byte transfer, instead of having to move the bytes across the native/JVM boundary...
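
For example, FileChannel.transferTo (which is real NIO; the file layout below is made up) lets the OS move bytes from the Safe file into the DB file without copying them through the JVM heap:

    import java.io.IOException;
    import java.nio.channels.FileChannel;

    class SafeFlush {

        /** Copy count bytes from the Safe file into the DB file.  The
            transfer happens inside the OS/file system where possible. */
        static void flushSafeToDb(FileChannel safe, FileChannel db,
                                  long safeOffset, long dbOffset, long count)
                throws IOException {
            db.position(dbOffset);
            long done = 0;
            while (done < count) {
                // transferTo may move fewer bytes than requested, so loop
                done += safe.transferTo(safeOffset + done, count - done, db);
            }
        }
    }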
 
 
 
Please let me know your thoughts - especially on anything I might be missing with #1...
 
Cheers,
 
- K


 