From: Chas E. <cem...@sn...> - 2009-11-05 15:18:27
I'm seeing some sporadic errors with larger data loads / database sizes (e.g. .db file > 600MB or so), though that's only correlative at this point. The most common one is this:

java.lang.Error: double get for block 0
    at jdbm.recman.RecordFile.get(RecordFile.java:466)
    at jdbm.recman.PhysicalRowIdManager.fetch(PhysicalRowIdManager.java:213)
    at jdbm.recman.BaseRecordManager.fetch(BaseRecordManager.java:594)
    at jdbm.recman.CacheRecordManager.fetch(CacheRecordManager.java:313)
    at jdbm.recman.CacheRecordManager.fetch(CacheRecordManager.java:293)
    at jdbm.htree.HashDirectory$HDIterator.prepareNext(HashDirectory.java:543)
    at jdbm.htree.HashDirectory$HDIterator.prepareNext(HashDirectory.java:556)
    at jdbm.htree.HashDirectory$HDIterator.prepareNext(HashDirectory.java:556)
    at jdbm.htree.HashDirectory$HDIterator.next(HashDirectory.java:499)

java.lang.Error: double get for block 13689
    at jdbm.recman.RecordFile.get(RecordFile.java:466)
    at jdbm.recman.PhysicalRowIdManager.fetch(PhysicalRowIdManager.java:177)
    at jdbm.recman.BaseRecordManager.fetch(BaseRecordManager.java:594)
    at jdbm.recman.CacheRecordManager.fetch(CacheRecordManager.java:313)
    at jdbm.recman.CacheRecordManager.fetch(CacheRecordManager.java:293)
    at jdbm.htree.HashDirectory.get(HashDirectory.java:197)
    at jdbm.htree.HashDirectory.get(HashDirectory.java:204)
    at jdbm.htree.HashDirectory.get(HashDirectory.java:204)
    at jdbm.htree.HTree.get(HTree.java:142)

The latter is a straightforward HTree.get that fails. The former occurs while the application is simply gathering all of the keys in the HTree into a set -- there are no interleaved mutating operations, and only one iterator is in play. The app is a single-threaded process in both cases.

FWIW, in the second case I happened to have a breakpoint set so I could inspect things if/when this exception occurred again, and I saw that only two entries were in the RecordFile's inUse collection, with keys 0 and 13689. After digging around a bit, it seems like this shouldn't be possible if the CacheRecordManager's MRU cache is doing its job properly (i.e. persisting blocks that roll off the cache so they're no longer "in use" at the RecordFile level when the next fetch on them causes a cache miss). Does that seem reasonable, or am I off on the wrong track?

I've also seen a number of Java deserialization errors in the same circumstances (large database sizes), but I unfortunately lost those stack traces. I hope to provoke them again shortly.

Thanks,

- Chas
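P.S. To make the hypothesis concrete, here's a toy model of the invariant I think is being violated. This is my own sketch, not jdbm's actual code: BlockManager is a hypothetical stand-in for RecordFile's inUse bookkeeping, and the comments describe the suspected failure mode rather than anything I've confirmed in the source.

    import java.util.HashMap;
    import java.util.Map;

    // Minimal stand-in for RecordFile's pinning discipline: get() pins a
    // block in an inUse map and throws "double get" if it's already pinned;
    // release() unpins it so the block can be fetched again later.
    class BlockManager {
        private final Map<Long, String> inUse = new HashMap<>();

        String get(long blockId) {
            if (inUse.containsKey(blockId)) {
                throw new Error("double get for block " + blockId);
            }
            String block = "block-" + blockId;  // pretend this was read from disk
            inUse.put(blockId, block);          // pin until released
            return block;
        }

        void release(long blockId) {
            inUse.remove(blockId);              // unpin; required before any re-get
        }
    }

    public class DoubleGetDemo {
        public static void main(String[] args) {
            BlockManager file = new BlockManager();

            // Correct discipline: every get is paired with a release.
            file.get(0);
            file.release(0);
            file.get(0);       // fine: block 0 was released first
            file.release(0);

            // Suspected failure mode: a cache drops its reference to a block
            // without releasing it, then a later cache miss re-fetches it.
            file.get(13689);   // pinned, but the release never happens...
            file.get(13689);   // java.lang.Error: double get for block 13689
        }
    }

If blocks that roll off the MRU aren't being released back to the RecordFile, a subsequent cache miss on the same block would fail exactly like the second call above, which would be consistent with the stale inUse entries I saw at the breakpoint.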