I have a huge hashtable persisted using JDBM. When I open this hashtable and start reading data from it, I notice that there is almost one disk read for each get call.
Is it possible to control how many loaded pages are kept in memory before being discarded?
It would also be good to be able to preload a certain amount of data, since today we have to serve a fair number of requests before a useful cache is built up.
I understand that some kind of caching is provided by the ObjectCache, but controlling memory usage based on the number of objects is difficult.
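For illustration, the kind of count-bounded cache being asked about can be sketched with the standard library's `LinkedHashMap` in access order. This is not JDBM's ObjectCache API, just a minimal sketch of an LRU cache capped by object count; `maxObjects` is a hypothetical limit:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Sketch of an object cache bounded by entry count, evicting the
 * least recently used entry once the limit is exceeded.
 */
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxObjects;

    public BoundedCache(int maxObjects) {
        super(16, 0.75f, true); // access-order = true gives LRU behavior
        this.maxObjects = maxObjects;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the eldest (least recently used) entry when over the cap.
        return size() > maxObjects;
    }
}
```

A cache like this bounds memory only indirectly, by object count rather than bytes, which is exactly the difficulty mentioned above when object sizes vary.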
While inserting data, I am able to bound memory usage by calling commit after inserting a certain amount of data.
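That commit-every-N-inserts pattern can be sketched as follows. The `RecordStore` interface here is hypothetical, standing in for a JDBM-backed hashtable with a `commit` operation; only the batching logic is the point:

```java
/**
 * Sketch of the commit-after-N-inserts pattern: batching writes and
 * committing periodically so uncommitted records do not accumulate
 * in memory without bound.
 */
interface RecordStore {
    void put(String key, String value);
    void commit(); // assumed to flush pending changes and release memory
}

class BatchingWriter {
    private final RecordStore store;
    private final int batchSize;
    private int pending = 0;

    BatchingWriter(RecordStore store, int batchSize) {
        this.store = store;
        this.batchSize = batchSize;
    }

    void put(String key, String value) {
        store.put(key, value);
        if (++pending >= batchSize) {
            store.commit(); // bound memory held by uncommitted inserts
            pending = 0;
        }
    }
}
```

With a batch size of, say, 1000, memory held by uncommitted records stays roughly constant during a bulk load instead of growing with the total insert count.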