On 1/22/08, Kevin Day <kevin@trumpetinc.com> wrote:
I'm puzzled by one small part of this (Naveen definitely needs to add purging capabilities to his app, but there's another issue embedded in this question): any reason the file should get corrupted after 2 GB?  It seems like the user may be running on FAT32 or some other FS with a 32-bit file size limitation.  To my knowledge, the record pointers, etc., in jdbm should be able to handle up to 63 bits of address space.

Yes, I've grown databases far beyond 2 GB without trouble.  I didn't speculate on this since the claim wasn't substantiated.  I don't want to feed the FUD troll.
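
For reference, the public API only ever hands out 64-bit record ids, so jdbm's own addressing shouldn't be the bottleneck.  A minimal sketch against the jdbm 1.x API ("testdb" is an arbitrary base name I made up):

    import jdbm.RecordManager;
    import jdbm.RecordManagerFactory;

    public class RecidCheck {
        public static void main(String[] args) throws Exception {
            // jdbm creates testdb.db (data) and testdb.lg (transaction log)
            RecordManager recman = RecordManagerFactory.createRecordManager("testdb");
            // insert() returns a long recid -- a 64-bit handle, so record
            // addressing is not what would cap the store near 2 GB
            long recid = recman.insert("hello");
            recman.commit();
            recman.close();
        }
    }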

If the user is on an FS with a limited file size, then the corruption they are running into may be similar to the problem I mentioned a few months back, where corruption can occur if the disk runs out of space (this is something the transaction manager should prevent, but currently does not).
In our app, when jdbm throws an IOException, we kill the entire record manager immediately to make sure we don't get hit by this kind of corruption.
Ultimately, I think we need a flag in the transaction manager that refuses to commit additional transactions once any transaction fails.  I'm open to suggestions on that.
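
Something along these lines, maybe (a hypothetical sketch -- FailFastRecordManager is a made-up wrapper, not an existing jdbm class):

    import java.io.IOException;
    import jdbm.RecordManager;

    // Hypothetical wrapper: once any commit fails at the I/O level,
    // refuse all further commits so we never write over a torn log.
    public class FailFastRecordManager {
        private final RecordManager delegate;
        private volatile boolean failed = false;

        public FailFastRecordManager(RecordManager delegate) {
            this.delegate = delegate;
        }

        public void commit() throws IOException {
            if (failed) {
                throw new IOException("disabled after earlier I/O failure");
            }
            try {
                delegate.commit();
            } catch (IOException e) {
                failed = true;                        // latch: no more commits
                try { delegate.close(); } catch (IOException ignored) { }
                throw e;
            }
        }
    }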

I think this approach has merit.  The main issue I can see is that other kinds of exceptions may also lead a transaction to fail (e.g. exceptions during serialization), and I'm not sure we'd want to force reinitializing the record manager for those; I'm thinking mostly of the usability impact of the change.  I like the idea of a fail-fast semantic to prevent further corruption.
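
One way to sidestep the serialization case might be to serialize up front, before the record manager is ever involved, so an application-level failure can't be mistaken for a store-level one.  A hypothetical illustration (toBytes is a made-up helper; recman is an ordinary jdbm RecordManager):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;

    // Serialize the value before touching the record manager: a bad value
    // fails here, with the store untouched, and can safely be retried.
    static byte[] toBytes(Object value) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(value);
        oos.close();
        return bos.toByteArray();
    }

    // Caller side:
    //   byte[] data = toBytes(value);     // app-level failure: keep recman alive
    //   long recid = recman.insert(data); // from here on, an IOException is
    //   recman.commit();                  // plausibly fatal -> fail fast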