Just to be clear, the issue here is really about the size of your
transactions, not the size of the database. JDBM lets you access
extremely large databases (i.e., multi-gigabyte) without using much memory
(other than the user-configurable cache).
The problem lies in JDBM's transaction manager, which currently holds
all modified objects [actually, dirty pages] in memory during the
transaction. This is to simplify the design and improve performance.
Changing this behavior is not a trivial task, but if you're willing to
learn the internals of the jdbm.recman package, and more specifically the
TransactionManager, you could add support for logging transactions to
disk and then replaying the transaction log from the file during commit.
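To illustrate the idea, here is a minimal sketch of a disk-backed transaction log: dirty pages are appended to a file as they are modified (so they no longer need to stay on the heap), and the log is streamed back and applied at commit time. All class and method names here are illustrative, not JDBM's actual API; the real work would be wiring something like this into TransactionManager.

```java
import java.io.*;
import java.util.function.BiConsumer;

// Hypothetical sketch, not JDBM code: spill dirty pages to a log file
// during the transaction, then replay the log at commit.
public class DiskTransactionLog implements Closeable {
    private final File logFile;
    private final DataOutputStream out;

    public DiskTransactionLog(File logFile) throws IOException {
        this.logFile = logFile;
        this.out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(logFile)));
    }

    // Called whenever a page becomes dirty: append it to the log
    // immediately instead of holding it in memory until commit.
    public void logPage(long pageId, byte[] data) throws IOException {
        out.writeLong(pageId);
        out.writeInt(data.length);
        out.write(data);
    }

    // At commit, stream the log back and hand each page to the writer
    // (e.g. code that writes the page to its final spot in the .db file).
    public void replay(BiConsumer<Long, byte[]> pageWriter) throws IOException {
        out.flush();
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(logFile)))) {
            while (in.available() > 0) {
                long pageId = in.readLong();
                byte[] data = new byte[in.readInt()];
                in.readFully(data);
                pageWriter.accept(pageId, data);
            }
        }
    }

    @Override
    public void close() throws IOException {
        out.close();
        logFile.delete();
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("txn", ".log");
        try (DiskTransactionLog log = new DiskTransactionLog(tmp)) {
            log.logPage(1L, "hello".getBytes());
            log.logPage(2L, "world".getBytes());
            log.replay((id, data) ->
                    System.out.println(id + " -> " + new String(data)));
        }
    }
}
```

With this approach the memory footprint of a transaction is roughly one page at a time rather than all dirty pages; the trade-off is the extra disk I/O of writing every modified page twice (once to the log, once at its final location), which is the performance cost the current in-memory design avoids.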
I can provide more guidance if you're committed to getting this done.
Andreas Harth wrote:
> I am a happy user of JDBM, great piece of code. I have used JDBM together
> with some parser and server code for an RDF repository that's accessible
> over HTTP.
> I get OutOfMemory exceptions if I try to process files (one transaction)
> that don't fit into the 64 MB the JVM is allowed to consume. I could
> increase JVM's memory using -Xmx, but I will have to process files that
> won't fit into main memory (hundreds of MB).
> It seems that JDBM handles one transaction in main memory, and stores the
> data on disk after the commit(). Is it possible to do the transaction
> buffering directly on disk? Where should I look to implement that and
> what would be the best strategy to do so?
> PS. I am using a CVS checkout from about two weeks ago.