HTree memory not released

  • Sarav S

    Sarav S - 2009-12-08

    We are using a JDBM HTree as a cache for our custom objects; a job checks for updates and refreshes the objects in the cache.

    We expect around 200k objects in the cache, all of which should be written to the file rather than held in memory. But when I look into the heap dump, I see around 80k objects in memory, referenced from 'jdbm.htree.HashBucket'.

    We have transactions enabled and commit after every put to the cache. Is there a memory leak in the HTree, or am I missing some configuration? Can someone help me? Thanks


  • Kevin Day

    Kevin Day - 2009-12-08

    Do you have record caching turned on?  By default, an MRU record cache is used with 1000 entries.  Remember that the object cache is caching the low level objects (the HashBucket objects), not the values you are storing in the hash buckets.

    Try running with caching turned completely off and see what results you get.
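    The MRU behavior described above can be sketched with a plain LinkedHashMap in access order - an illustrative stand-in, not JDBM's actual cache implementation. The point is that the cache pins the N most recently used record objects (the HashBucket pages), and anything those records reference stays strongly reachable from the heap:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MruSketch {
    /** A minimal MRU cache: keeps the CAPACITY most recently accessed entries. */
    static <K, V> Map<K, V> mruCache(final int capacity) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) { // true = access order
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public static void main(String[] args) {
        Map<Long, Object> cache = mruCache(1000);
        // Insert many "records"; only the 1000 most recently used stay
        // strongly reachable from the cache.
        for (long i = 0; i < 200_000; i++) {
            cache.put(i, new Object());
        }
        System.out.println(cache.size()); // 1000
    }
}
```

    Everything evicted from such a cache becomes eligible for garbage collection - which is why shrinking (or disabling) the record cache shrinks the set of application objects visible in a heap dump.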

  • Sarav S

    Sarav S - 2009-12-08

    Thanks for the reply.

    I tried RecordManagerOptions.CACHE_SIZE = 0 and got an error while starting the app, so I changed it to RecordManagerOptions.CACHE_SIZE = 10. But I still see the objects added to JDBM in the heap dump.
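    One note on syntax: in JDBM the cache size is passed as a string-valued property to RecordManagerFactory.createRecordManager, not assigned to the RecordManagerOptions constant itself. A minimal sketch - the literal key "jdbm.cache.size" is what I believe RecordManagerOptions.CACHE_SIZE resolves to in 1.0 (used here only so the snippet compiles without jdbm on the classpath; verify against your version):

```java
import java.util.Properties;

public class CacheSizeProps {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed literal behind RecordManagerOptions.CACHE_SIZE; note the
        // value is passed as a String, not assigned as an int.
        props.setProperty("jdbm.cache.size", "10");
        System.out.println(props.getProperty("jdbm.cache.size")); // prints 10
        // These props would then be handed to:
        // RecordManagerFactory.createRecordManager(fileName, props);
    }
}
```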

  • Kevin Day

    Kevin Day - 2009-12-08

    I run the following test and monitor with a heap space analyzer.  I see the heap growing and contracting - nothing out of control.  I also tested with the default cache enabled, and it performs similarly (the peak profile is different, natch, but no unbounded heap growth).

    If the HTree were somehow responsible for the objects you are seeing in your heap dump, then this test should show a memory leak.

    Can you put together a similar test that shows your issue?

    If you are using a heap space analyzer, can you please take a look at the referents on your objects and see if any of the referents are jdbm objects?

        /*
         * Created on Dec 8, 2009
         * (C) 2009 Trumpet, Inc.
         */
        package jdbm.htree;

        import java.io.IOException;
        import java.util.Properties;

        import jdbm.RecordManager;
        import jdbm.RecordManagerFactory;
        import jdbm.RecordManagerOptions;
        import jdbm.recman.TestCaseWithTestFile;

        /**
         * @author kevin
         */
        public class TestBigInsert extends TestCaseWithTestFile {

            public void setUp() throws Exception {
                super.setUp();
            }

            public void tearDown() throws Exception {
                super.tearDown();
            }

            public void testLotsOfInsertions() throws IOException {
                Properties props = new Properties();
                props.setProperty(RecordManagerOptions.CACHE_TYPE, RecordManagerOptions.NO_CACHE);
                RecordManager recman = RecordManagerFactory.createRecordManager(TestCaseWithTestFile.testFileName, props);

                HTree testTree = getHtree(recman, "htree");

                int total = Integer.MAX_VALUE;
                for (int i = 0; i < total; i++) {
                    testTree.put(Long.valueOf(i), Long.valueOf(i));
                    if (i % 10000 == 0) {
                        System.out.println("Free mem = " + Runtime.getRuntime().freeMemory());
                    }
                }
            }

            private static HTree getHtree(RecordManager recman, String name) throws IOException {
                long recId = recman.getNamedObject("htree");
                HTree testTree;
                if (recId != 0) {
                    testTree = HTree.load(recman, recId);
                } else {
                    testTree = HTree.createInstance(recman);
                    recman.setNamedObject("htree", testTree.getRecid());
                }
                return testTree;
            }
        }

  • Sarav S

    Sarav S - 2009-12-08

    I am using version 1.0 and am not able to see a RecordManagerOptions.NO_CACHE option. Can you tell me which version you are using? Thanks

  • Kevin Day

    Kevin Day - 2009-12-08

    PS - if you want to work with the 1.0 code, the test I posted above will work just fine if you don't add the NO_CACHE option - just comment out that line, and the heap still doesn't grow out of control.  Highly unlikely that there is a memory leak in the HTree.

    If you can put together a similar test case that demonstrates your issue, I'd be happy to take a look at the code and see if anything pops out that you might be doing wrong.

  • Sarav S

    Sarav S - 2009-12-09

    Thanks for the reply. I ran your test case on my local machine and I don't see the heap growing out of control either, but I am trying to find out why there are references to the objects on the heap at all.

    My understanding was that the objects would live in the file, and the heap would hold only a reference/pointer to each object in the file.

  • Kevin Day

    Kevin Day - 2009-12-09

    ok - so no memory leak. The question now is just why objects are stored in jdbm's L1 or L2 cache at all. The answer to that one is fairly straightforward: it sounds like your perception of what jdbm is might be a bit off.

    jdbm is a full featured, embedded, object database.  It has many features to enhance performance, including relatively advanced caching behavior.

    So, while jdbm can be used as a persistent cache like you describe, by default it's going to do everything it can to work fast - and that means keeping objects around if it thinks they might be used again.

    You can control these performance optimizations, including disabling the record cache entirely (that's where the NO_CACHE value comes into play).

    In addition to the L1 and L2 record cache (which is where the object references you are seeing are most likely coming from), there is additional caching occurring at the block level (these will appear as byte arrays if you do a heap dump).

    To provide some further clarification, the record level caches hold on to record objects in jdbm.  Each page in the HTree is a record object, and each page can contain a largish number of references to your objects (the maximum, I think, is set to 100 entries - but I'm not positive on that).  So, if the L1 cache is in effect (this holds on to the last 1000 record objects used), you could have a theoretical max of 100,000 of your objects still held in cache.  If the load factor in the hashing algorithm is 0.8, then you have about 80K objects effectively held in cache.

    When you dropped the cache size to 10, I would have expected this number to drop to 800 objects (10x100x0.8).

    If you disable the cache entirely, then you shouldn't see referents.

    I'm not sure what behavior you are seeing when you drop the cache size down, but if it's within an order of magnitude of the numbers I'm showing above, then the system is working as designed.
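    The back-of-the-envelope estimate above is easy to reproduce: retained objects ≈ cached records × entries per record page × load factor, where the 100 entries/page and 0.8 load factor are the guesses stated above, not confirmed constants:

```java
public class CacheEstimate {
    /** Estimated user objects pinned by the record cache (the model above). */
    static long pinnedObjects(int cachedRecords, int entriesPerPage, double loadFactor) {
        return Math.round(cachedRecords * entriesPerPage * loadFactor);
    }

    public static void main(String[] args) {
        // Default L1 cache of 1000 records, ~100 entries/page, 0.8 load factor:
        System.out.println(pinnedObjects(1000, 100, 0.8)); // 80000
        // With the cache size dropped to 10:
        System.out.println(pinnedObjects(10, 100, 0.8));   // 800
    }
}
```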

