We have a use case where we would like several JVMs on the same machine to load data from a common (j)dbm file. We would like to use shared memory for this (which the Java FileChannel API provides through memory-mapped files), as it prevents per-process memory bloat.
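For reference, here is a minimal sketch of the FileChannel mapping we have in mind (class name and file contents are just illustrative). A READ_ONLY mapping is backed by the OS page cache, so every process mapping the same file shares one physical copy of its pages:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

public class MmapDemo {
    // Map the whole file read-only and copy its bytes out.  The mapping
    // stays valid after the channel is closed, and its pages are shared
    // across every JVM that maps the same file.
    public static String mapAndRead(File file) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file, "r");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            byte[] out = new byte[(int) ch.size()];
            buf.get(out);
            return new String(out, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("mmap", ".db");
        f.deleteOnExit();
        try (FileOutputStream o = new FileOutputStream(f)) {
            o.write("hello".getBytes(StandardCharsets.UTF_8));
        }
        System.out.println(mapAndRead(f)); // prints "hello"
    }
}
```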
We looked at the code a bit - and it seems that this should be possible by:
- changing RecordFile to be an interface (or an abstract base class)
- adding an implementation (let's call it SharedFile) that uses an mmapped segment for reading/writing the underlying file (instead of a RandomAccessFile).
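To make the proposal concrete, here is a rough sketch of what we mean. The interface and method names below are hypothetical; jdbm's real RecordFile has a richer API (block I/O, transactions, dirty tracking), and this only covers the read path:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Hypothetical minimal shape of the refactor: a block-read interface
// that both the existing RandomAccessFile-based code and SharedFile
// could implement.
interface RecordSource {
    byte[] readBlock(long blockId) throws IOException;
}

class SharedFile implements RecordSource {
    static final int BLOCK_SIZE = 8192; // illustrative block size, an assumption

    private final MappedByteBuffer map;

    SharedFile(File file) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file, "r");
             FileChannel ch = raf.getChannel()) {
            // READ_ONLY mapping: the pages live in the OS page cache and
            // are shared by every process that maps the same file.
            map = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
        }
    }

    public byte[] readBlock(long blockId) {
        byte[] block = new byte[BLOCK_SIZE];
        ByteBuffer view = map.duplicate(); // private position/limit per caller
        view.position((int) (blockId * BLOCK_SIZE));
        view.get(block, 0, Math.min(BLOCK_SIZE, view.remaining()));
        return block;
    }
}
```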
In read-only, shared-memory mode we would size the in-memory hash table (cache) fairly small and fall back to the SharedFile to page in any data that isn't in the cache. That gives us the best of both worlds: process heap for caching the most frequently used objects inside the process, and shared memory for everything else.
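The cache-miss fallback could look something like the following. This is purely illustrative (not jdbm's actual cache classes): a small bounded on-heap LRU sitting in front of a block loader such as the mmap-backed file:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.LongFunction;

// Illustrative small on-heap LRU in front of a block loader (e.g. an
// mmap-backed file).  Names are hypothetical, not jdbm's cache classes.
class BlockCache {
    private final int capacity;
    private final LongFunction<byte[]> loader;
    private final LinkedHashMap<Long, byte[]> lru;

    BlockCache(int capacity, LongFunction<byte[]> loader) {
        this.capacity = capacity;
        this.loader = loader;
        // accessOrder=true turns LinkedHashMap into an LRU
        this.lru = new LinkedHashMap<Long, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, byte[]> e) {
                return size() > BlockCache.this.capacity;
            }
        };
    }

    byte[] get(long blockId) {
        byte[] block = lru.get(blockId);
        if (block == null) {
            block = loader.apply(blockId); // miss: page in from shared memory
            lru.put(blockId, block);
        }
        return block;
    }
}
```

Hot blocks stay on the process heap; everything else is re-read through the shared mapping, where the OS page cache keeps a single machine-wide copy.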
We can probably take a stab at implementing this ourselves, but we would like some feedback on the feasibility of the approach, along with any thoughts/comments.