Kevin,
 
95% of all makeLong() calls are from the fetch and update methods on CacheRecordManager.
 
-bryan
-----Original Message-----
From: Kevin Day [mailto:kevin@trumpetinc.com]
Sent: Thursday, September 22, 2005 3:40 PM
To: Thompson, Bryan B.
Subject: re[4]: [Jdbm-developer] re: Updates driven by insert operations

Bryan-
 
Sounds good.  4% is definitely nothing to sneeze at - but I'm curious about where the java.nio.Bits.makeLong() calls are actually being made.  It seems like this would be happening in the disk I/O, which shouldn't have anything to do with the cache.  Am I missing something?
 
Keep in mind that the soft cache does NOT help you prevent unneeded inserts.  When you call update, it is going to mark the cache entry as dirty.  As you add items to the top of the MRU, the dirty cache entry will fall off the bottom of the MRU, at which point the insert() gets performed.  The softcache then holds on to the non-dirty cache entry in case it is needed again.
 
So, the softcache is going to be ideal for dealing with read performance.  But write performance (if you do multiple updates to the same object inside a single transaction) could still be affected - i.e. you will have multiple inserts when you really could have gotten away with only one.
 
The efficiency hit here is really in the area of serializing the dirty object and putting the contents in the transaction manager's cache (nothing gets written to disk until the transaction ends, so you won't have multiple disk writes where you only need one).  You mentioned that you were seeing a lot of activity in insert(), which is why I brought this up in the first place.
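Just to make that write path concrete, here is a minimal sketch of the kind of wrapper we're talking about - purely illustrative, with invented names, not the actual jdbm CacheRecordManager code:

import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch only: a cache wrapper that marks entries dirty on update()
// and only writes them through to the base record manager when they fall off
// the bottom of the MRU.  "BaseRecordManager" is just a stand-in interface.
class WritePathSketch {

    interface BaseRecordManager {
        void update(long recid, Object obj) throws IOException;
    }

    static class CacheEntry {
        Object obj;
        boolean dirty;
    }

    private final Map<Long, CacheEntry> mru;

    WritePathSketch(final BaseRecordManager base, final int capacity) {
        // access-ordered LinkedHashMap gives a simple MRU with eldest-entry eviction
        this.mru = new LinkedHashMap<Long, CacheEntry>(capacity, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<Long, CacheEntry> eldest) {
                if (size() <= capacity) {
                    return false;
                }
                CacheEntry e = eldest.getValue();
                if (e.dirty) {
                    try {
                        // only here, on eviction, does the dirty record get
                        // serialized and handed to the base record manager
                        base.update(eldest.getKey().longValue(), e.obj);
                    } catch (IOException ex) {
                        throw new RuntimeException(ex);
                    }
                }
                return true;
            }
        };
    }

    // update() just marks the entry dirty; nothing is written until eviction.
    // Note the Long boxing on every call - the very thing the primitive
    // long hash map change would avoid.
    void update(long recid, Object obj) {
        Long key = Long.valueOf(recid);
        CacheEntry e = mru.get(key);
        if (e == null) {
            e = new CacheEntry();
            mru.put(key, e);
        }
        e.obj = obj;
        e.dirty = true;
    }
}

In a sketch like that, repeated update() calls on the same recid inside one transaction can each end up costing a serialize-and-write if reads keep pushing the entry off the bottom in between - which is exactly the case being described.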
 
If you are only updating your object once per transaction (or making the updates close enough together that the object won't fall off of the L1 cache), then the entire discussion is moot.
 
- K
 
 
 
 
 
> Kevin,
 
I looked at all the classes that would be impacted by the change from "Object key" to "long key" -- as you say, this does not seem to be a problem.  However, I see java.nio.Bits.makeLong() as accounting for over 4% of the clock time, and all of that is jdbm, so I think this is well worthwhile.
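One possible shape for that long-keyed map - a purely illustrative sketch, not existing jdbm code: open addressing with linear probing, fixed capacity (a power of two), no removal or resizing, recid 0 reserved as the empty-slot marker, and the caller responsible for staying under capacity:

// Sketch of a fixed-capacity, open-addressed hash map keyed on primitive long.
// Capacity must be a power of two; occupancy must stay below capacity or the
// probe loop will not terminate.
class FixedLongHashMap {

    private final long[] keys;      // 0 marks an empty slot
    private final Object[] values;
    private final int mask;

    FixedLongHashMap(int capacityPowerOfTwo) {
        keys = new long[capacityPowerOfTwo];
        values = new Object[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;
    }

    private int indexOf(long key) {
        int i = (int) (key ^ (key >>> 32)) & mask;   // same mixing Long.hashCode() uses
        while (keys[i] != 0 && keys[i] != key) {
            i = (i + 1) & mask;                      // linear probing
        }
        return i;
    }

    void put(long key, Object value) {
        int i = indexOf(key);
        keys[i] = key;
        values[i] = value;
    }

    Object get(long key) {
        int i = indexOf(key);
        return keys[i] == key ? values[i] : null;
    }
}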
 
I'm not sure about the twin MRUs.  I am focused on the use of the MRU behind the softcache, so you never fall off the MRU if there is a hard reference to the record.  The effect of this really, really depends on the access patterns in your code.  I'll start thinking about some mechanisms to characterize "premature" cache evictions (i.e., you evicted a record but then needed it again within a second or two).  That would be intrusive instrumentation, but it could be worthwhile when trying to tune the cache.
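Roughly, that sort of instrumentation could be as simple as the following - all names here are made up for illustration, nothing below exists in jdbm today:

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of counting "premature" evictions: a record evicted from the L1 MRU
// and then needed again from the base record manager within a short window.
class EvictionStats {

    private final long windowMillis;
    private final Map<Long, Long> evictedAt;   // recid -> eviction timestamp
    private long prematureEvictions = 0;

    EvictionStats(long windowMillis, final int maxTracked) {
        this.windowMillis = windowMillis;
        // bound the tracking map so the instrumentation itself stays cheap
        this.evictedAt = new LinkedHashMap<Long, Long>(16, 0.75f, false) {
            protected boolean removeEldestEntry(Map.Entry<Long, Long> eldest) {
                return size() > maxTracked;
            }
        };
    }

    /** Call when the MRU evicts a record. */
    void recordEviction(long recid) {
        evictedAt.put(Long.valueOf(recid), Long.valueOf(System.currentTimeMillis()));
    }

    /** Call on a cache miss, just before fetching from the base record manager. */
    void recordMiss(long recid) {
        Long evictionTime = evictedAt.remove(Long.valueOf(recid));
        if (evictionTime != null
                && System.currentTimeMillis() - evictionTime.longValue() <= windowMillis) {
            prematureEvictions++;   // evicted, then needed again almost immediately
        }
    }

    long getPrematureEvictions() {
        return prematureEvictions;
    }
}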
 
-bryan

-----Original Message-----
From: Kevin Day [mailto:kevin@trumpetinc.com]
Sent: Thursday, September 22, 2005 2:37 PM
To: Thompson, Bryan B.
Subject: re[2]: [Jdbm-developer] re: Updates driven by insert operations


Bryan-
 

I think it would be safe to adjust the MRU to use the primitive hashmap - but doing so will change the interface of the cache...
 
You may want to think real hard about whether you want to do that or not, as the changes will propagate to a number of classes, but from my initial look, I don't see any issues.
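For example, the cache interface would presumably end up looking something like this (a hypothetical sketch, not the current jdbm interface, which keys on Object):

// Hypothetical primitive-keyed cache interface; the point is that callers
// never have to allocate a Long just to do a lookup.
interface LongKeyCache {
    Object get(long recid);
    void put(long recid, Object value);
    Object remove(long recid);
    void removeAll();
}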
 
Another option would be to use Long.valueOf() instead of new Long().  Newer JVMs are pretty good about intelligently creating Long objects.
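To illustrate the difference (a trivial sketch; recid is just a placeholder):

long recid = 1234L;                 // placeholder record id
Long boxed1 = Long.valueOf(recid);  // preferred idiom: the JVM may reuse cached instances
Long boxed2 = new Long(recid);      // always allocates a fresh object

That said, Long.valueOf() only reuses cached instances for small values, so for typical recids it will still allocate - the bigger win is a map keyed on primitive long that avoids the boxing entirely.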
 
 
At this stage, I'm just not seeing that as a performance issue (performance hit is almost entirely from calls to sync()).  That's why I've changed my focus to disk usage performance instead of speed.
 
 
I had a thought on the MRU...  Not sure if this would work or not, but it may be worth considering.
 
Currently, the MRU does not distinguish between records that have been updated and records that have just been read.  This means that if you do 5 updates, then do a lot of reads, then go back to update those same 5 objects, you'll get two update() hits for each of those objects as they are evicted from the MRU.
 
I don't know if it is even practical to manage two MRUs - one for items that have been updated, and one for items that have not been updated.  This would have to be managed in the CacheRecordManager, because the Caches themselves have no concept of dirty/not-dirty.
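To make the idea a little more concrete, a very rough sketch of the two-MRU arrangement might look like this - class and method names are invented, this is not existing jdbm code:

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: the record-manager-level wrapper keeps dirty and clean entries
// in separate bounded MRUs, so a burst of reads cannot push still-dirty
// records out (and force early writes to the base record manager).
class TwinMruSketch {

    private final Map<Long, Object> cleanMru;
    private final Map<Long, Object> dirtyMru;

    TwinMruSketch(int cleanCapacity, int dirtyCapacity) {
        cleanMru = boundedMru(cleanCapacity, false);
        dirtyMru = boundedMru(dirtyCapacity, true);
    }

    private Map<Long, Object> boundedMru(final int capacity, final boolean dirty) {
        return new LinkedHashMap<Long, Object>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<Long, Object> eldest) {
                if (size() <= capacity) {
                    return false;
                }
                if (dirty) {
                    // evicting from the dirty MRU is the only thing that
                    // forces a write-through
                    writeThrough(eldest.getKey().longValue(), eldest.getValue());
                }
                return true;
            }
        };
    }

    Object fetch(long recid) {
        Object obj = dirtyMru.get(Long.valueOf(recid));   // dirty records live here until flushed
        return obj != null ? obj : cleanMru.get(Long.valueOf(recid));
    }

    void cacheRead(long recid, Object obj) {
        cleanMru.put(Long.valueOf(recid), obj);           // reads only ever grow the clean MRU
    }

    void update(long recid, Object obj) {
        cleanMru.remove(Long.valueOf(recid));             // an updated record moves to the dirty MRU
        dirtyMru.put(Long.valueOf(recid), obj);
    }

    void writeThrough(long recid, Object obj) {
        // the real thing would serialize here and call the base record
        // manager's update(); omitted in this sketch
    }
}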
 
 
 
What do you think?
 
- K
 
 
 
  
>Yes, we run w/ transactions (except when bulk loading data).  I have been thinking of the MRU size in terms of the tradeoff between the expected cycle time until an object is reused and the memory constraints.  I intend to modify the MRU to use the array-based native long hash map since it has a fixed size -- that seems to be an optimal choice. -bryan

-----Original Message-----
From: jdbm-developer-admin@lists.sourceforge.net [mailto:jdbm-developer-admin@lists.sourceforge.net] On Behalf Of Kevin Day
Sent: Thursday, September 22, 2005 11:46 AM
To: jdbm-developer@lists.sourceforge.net
Subject: [Jdbm-developer] re: Updates driven by insert operations


Bryan-

Yes - the behavior you describe would make sense.  You may need to do some tuning on the size of the Level 1 MRU cache in your particular application.

Are you running with transactions turned on?  If so, then I think you want the L1 cache to hold at least as many objects as you use in a single transaction.

- K


-----------

One thing that became clear to me when using a call graph profile of jdbm is that updates are primarily driven (at least in our application, which is using the "soft" cache) by insert operations.  Inserts cause records to be evicted from the internal MRU held by the soft cache, which results in update requests to the base record manager.
