Yes, we run with transactions (except when bulk-loading data).  I have been thinking of the MRU size in terms of the tradeoff between the expected cycle time until an object is reused and our memory constraints.  I intend to modify the MRU to use the array-based native long hash map, since it has a fixed size -- that seems like the right choice here. -bryan
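P.S. A minimal sketch of the fixed-capacity MRU idea. Note this uses java.util.LinkedHashMap purely to illustrate the bounded-size eviction behavior; the actual jdbm change would use an array-based primitive-long hash map to avoid boxing Long keys. The class name is illustrative, not jdbm's API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Fixed-capacity MRU cache keyed by record id (sketch only).
class FixedSizeMru<V> extends LinkedHashMap<Long, V> {
    private final int capacity;

    FixedSizeMru(int capacity) {
        // accessOrder=true: iteration order runs from least- to
        // most-recently-used, so the eldest entry is the eviction victim
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Long, V> eldest) {
        // Evict the least-recently-used entry once capacity is exceeded;
        // in jdbm this is the point where a dirty record would be flushed
        // as an update to the base record manager.
        return size() > capacity;
    }
}
```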
-----Original Message-----
From: jdbm-developer-admin@lists.sourceforge.net [mailto:jdbm-developer-admin@lists.sourceforge.net] On Behalf Of Kevin Day
Sent: Thursday, September 22, 2005 11:46 AM
To: jdbm-developer@lists.sourceforge.net
Subject: [Jdbm-developer] re: Updates driven by insert operations

Bryan-
 
Yes - the behavior you describe would make sense.  You may need to do some tuning on the size of the Level 1 MRU cache in your particular application.
 
Are you running with transactions turned on?  If so, then I think you want the L1 cache to hold at least as many objects as you touch in a single transaction.
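As a back-of-the-envelope way to see why: any record touched in a transaction that overflows the L1 MRU before commit costs an extra update against the base record manager. This toy model (hypothetical numbers, not measured from jdbm) just counts that overflow:

```java
class CacheSizing {
    // Rough overflow model: each record touched beyond the cache
    // capacity displaces one cached record, i.e. forces one eviction
    // (an update against the base record manager) mid-transaction.
    static int evictionsPerTransaction(int cacheCapacity, int recordsPerTxn) {
        return Math.max(0, recordsPerTxn - cacheCapacity);
    }
}
```

So once the cache is at least as large as the per-transaction working set, the mid-transaction eviction cost drops to zero.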
 
- K
 
 
-----------
 
One thing that became clear to me from a call-graph profile of jdbm is
that updates are primarily driven (at least in our application, which
uses the "soft" cache) by insert operations.  Inserts cause records to be
evicted from the internal MRU held by the soft cache, which results in
update requests to the base record manager.
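The flow described above can be sketched roughly like this (hypothetical names and shapes, not jdbm's actual classes; the hard-reference MRU sits in front of the soft references, and its eviction handler pushes the victim back down as an update):

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Stand-in for the layer below the cache (illustrative, not jdbm's API).
interface BaseRecordManager {
    void update(long recid, Object record);
}

class SoftCacheSketch {
    private final int mruCapacity;
    private final BaseRecordManager base;
    // accessOrder=true: iteration runs least- to most-recently-used,
    // so the first entry is the eviction victim.
    private final LinkedHashMap<Long, Object> mru =
            new LinkedHashMap<>(16, 0.75f, true);

    SoftCacheSketch(int mruCapacity, BaseRecordManager base) {
        this.mruCapacity = mruCapacity;
        this.base = base;
    }

    void insert(long recid, Object record) {
        mru.put(recid, record);
        if (mru.size() > mruCapacity) {
            Iterator<Map.Entry<Long, Object>> it = mru.entrySet().iterator();
            Map.Entry<Long, Object> eldest = it.next();
            it.remove();
            // This eviction is the "update request to the base record
            // manager" that shows up in the call-graph profile.
            base.update(eldest.getKey(), eldest.getValue());
        }
    }
}
```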
 
 
_______________________________________________
Jdbm-developer mailing list
Jdbm-developer@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/jdbm-developer