We are using JDBM to store quite a large amount of data.
The file has grown to roughly 1 GB, but HTree.remove() does not reduce its size. Is there any way to defrag the file and reclaim the unused records?
Most likely, what you are asking about is packing the database (not defragging). The current implementation of jdbm does not have pack (or defrag) code written (we are actually in the middle of a discussion about how best to do this).
If you are only using a simple HTree, you can probably get the same result by reading each item from the HTree and re-inserting it into a new database.
Just enumerate the keys in the HTree, then pull the value for each key and insert it into an alternate record manager that has been created for a new file. When the copy is complete, the new file contains only live records, so it will be as compact as possible.
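A rough sketch of that copy loop, using the jdbm 1.x API (check the method names against the javadoc for your version; the file names `olddb`/`newdb` and the named object `"entries"` are assumptions for illustration):

```java
import java.util.Properties;

import jdbm.RecordManager;
import jdbm.RecordManagerFactory;
import jdbm.helper.FastIterator;
import jdbm.htree.HTree;

public class PackHTree {
    public static void main(String[] args) throws Exception {
        // Open the old (bloated) database and a fresh one to copy into.
        RecordManager oldRecman =
            RecordManagerFactory.createRecordManager("olddb", new Properties());
        RecordManager newRecman =
            RecordManagerFactory.createRecordManager("newdb", new Properties());

        // "entries" is a hypothetical name under which the HTree was
        // originally registered via setNamedObject().
        long oldRecid = oldRecman.getNamedObject("entries");
        HTree oldTree = HTree.load(oldRecman, oldRecid);

        HTree newTree = HTree.createInstance(newRecman);
        newRecman.setNamedObject("entries", newTree.getRecid());

        // Copy every key/value pair; commit periodically so the
        // transaction log does not grow without bound.
        FastIterator keys = oldTree.keys();
        Object key;
        int copied = 0;
        while ((key = keys.next()) != null) {
            newTree.put(key, oldTree.get(key));
            if (++copied % 1000 == 0) {
                newRecman.commit();
            }
        }
        newRecman.commit();

        oldRecman.close();
        newRecman.close();
        // Finally, swap the new database files in place of the old ones.
    }
}
```

Since only live records are copied, the new file ends up packed; the old file can then be deleted and replaced by the new one.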
Hope this gives you an approach you can use!