I have been profiling jdbm for our application and I have identified a hotspot in jdbm.recman.PhysicalRowIdManager.alloc(), which handles (re-)allocation requests for the store. The main issue is that jdbm performs a linear scan for the first deleted record large enough to satisfy the reallocation request. Based on a call-graph profile of our application, this scans an average of 291 slots (deleted physical records) in a page and then, 50% of the time, fails to find a suitable pre-existing record. Further, since jdbm chooses the first deleted record that is large enough (it stores the capacity of the record separately from the used size of the record), there is a tendency to waste space by reusing large records for small allocations.
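For concreteness, here is a minimal sketch of the first-fit scan as I understand it. The class and field names are made up for illustration; this is not jdbm's actual code, just the shape of the behavior I'm seeing in the profile:

```java
// Illustrative sketch only: slotCapacity stands in for the per-page table of
// freed physical records; 0 means the slot is unused.
class FreeSlotPage {
    long[] slotCapacity = new long[575];

    // First-fit: return the index of the first freed record large enough,
    // or -1 if none qualifies. This is the O(n) scan that shows up as the
    // hotspot, and it happily reuses a huge record for a tiny request.
    int getFirstLargerThan(long size) {
        for (int i = 0; i < slotCapacity.length; i++) {
            if (slotCapacity[i] >= size) {
                return i;
            }
        }
        return -1;
    }
}
```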
I have some ideas about how to fix this, ranging from optimizing the loop over the slots in the page, to caching the size of the largest record available for (re-)allocation, to a different allocation strategy based on multiple free lists in which each record is of a known size. Those options are ordered roughly from easy to hard, and also from least to most expected payoff.
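To make the last (and most involved) idea concrete, here is one way the "multiple free lists of known sizes" strategy could look: one list per power-of-two size class. This is my own sketch under that assumption, not a proposal for jdbm's actual on-disk layout:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of segregated free lists: lists[k] holds locations of freed records
// whose capacity falls in size class 2^k. Allocation is a constant-time pop,
// and a large record is never handed to a small request below its class.
class SizeClassFreeLists {
    @SuppressWarnings("unchecked")
    Deque<Long>[] lists = new Deque[32];

    SizeClassFreeLists() {
        for (int i = 0; i < lists.length; i++) lists[i] = new ArrayDeque<>();
    }

    // ceil(log2(size)): the smallest size class that can hold the request.
    static int classOf(long size) {
        return 64 - Long.numberOfLeadingZeros(Math.max(1, size) - 1);
    }

    void free(long location, long capacity) {
        lists[classOf(capacity)].push(location);
    }

    // Returns a freed location whose size class covers the request, or -1.
    long alloc(long size) {
        for (int k = classOf(size); k < lists.length; k++) {
            if (!lists[k].isEmpty()) return lists[k].pop();
        }
        return -1L;
    }
}
```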
Before I start modifying the code, I would like to get a better idea of how the jdbm allocation policy is performing for our application. In particular, I would like to be able to dump the list of deleted records available for reallocation as well as the amount of wasted space. I've looked at the code base, but I did not see anything explicitly oriented towards this.
Does anyone have a utility for dumping a jdbm store? If not, I will make writing one my first step.
I've already run into this problem and come up with a solution. The FreeLogicalRowIdPage class has the fix in it - I asked for feedback a while back before making similar changes to FreePhysicalRowIdPage, but I never heard back from anyone.
I found that, with transactions turned off, the code change improved insertion performance dramatically (several hundred percent).
Please take a look at the code changes in the last submit for FreeLogicalRowIdPage and give me any feedback. If it looks good to you, I'll make the same adjustments to FreePhysicalRowIdPage and you should see the hotspot go away.
I've started looking at the FreePhysicalRowIdPage class. The changes that you made to FreeLogicalRowIdPage do not carry over directly since the test is not for an "allocated" slot, but for one which is allocated and where the record has at least a given size.
I tried the same method anyway, but it results in both more time (slower execution) and more space (a larger store), since it skips over allocated slots for free physical records which do not satisfy the current request but might satisfy a subsequent one.
I'm going to think about some alternative approaches that we might try here. For example, maintaining a more complex transient data structure than the two counters you added to FreeLogicalRowIdPage, or reordering the slots so that they are dense and sorted by size, in which case we could just use a binary search for both getFirstFree() and getFirstGreaterThan(int size). We could re-order the slots whenever they are freed or allocated.
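Here is a sketch of what the "dense, sorted slots" alternative could look like. The method names mirror the ones mentioned above, but the class itself is illustrative, not jdbm's actual code:

```java
// Sketch: keep the free-record capacities packed and sorted so the lookup
// becomes a binary search instead of a linear scan.
class SortedFreeSlots {
    long[] capacities; // capacities of free records, sorted ascending
    int count;         // number of live entries; slots are kept dense

    SortedFreeSlots(int maxSlots) { capacities = new long[maxSlots]; }

    // Lower-bound binary search: index of the smallest free record with
    // capacity >= size, or -1 if no record is large enough.
    int getFirstGreaterThan(int size) {
        int lo = 0, hi = count;
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (capacities[mid] < size) lo = mid + 1; else hi = mid;
        }
        return lo < count ? lo : -1;
    }

    // Re-ordering on free: shift entries to keep the array sorted. O(n)
    // moves, but a single arraycopy is cheap next to repeated full scans.
    void insertFree(long capacity) {
        int pos = 0;
        while (pos < count && capacities[pos] < capacity) pos++;
        System.arraycopy(capacities, pos, capacities, pos + 1, count - pos);
        capacities[pos] = capacity;
        count++;
    }
}
```

A nice side effect of picking the lower bound is that it also fixes the space-waste problem: the allocator hands out the smallest record that fits rather than the first one it happens to find.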
There may be other solutions as well, but I would like to resolve this bottleneck.
On reading in more detail, I can see that my fix to FreePhysicalRowIdPage may not address all of your questions.
The other question, about how jdbm allocates records, is directly in line with a question I emailed to the developers listserv late last week. I've put together a proposal for one way of addressing this (allowing splitting of large freed records and merging of adjacent smaller records) - I'll email it to the listserv tomorrow when I get a chance.
It's tricky, because of some limitations in ability to reverse map a physical to logical row ID, but I think I've come up with a workable solution that doesn't add a lot of complexity.
Also, I wanted to make a couple of points about my changes to FreePhysicalRowIdPage:
1. The change I made is not perfect - it is a quick hack, but one that eliminates about 99% of the CPU overhead of the linear search for available slots in the lookup tables.
The "correct" way to do this would be to treat the available slots as a linked list, with each open slot pointing to the next open slot. The block offset could be set to a negative value to indicate that the slot is available (currently it is set to 0).
The location of the first available slot would then be stored in the first record of the first page of the lookup table.
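Roughly, the linked-list scheme would look like the sketch below. I'm using a plain in-memory array in place of the on-disk slot table, and the encoding details (how a negative offset stores the "next free" index) are my own invention for illustration:

```java
// Sketch of the free-slot linked list: a value >= 1 in blockOffset means the
// slot is in use (a real block offset); a value <= 0 means the slot is free
// and encodes the index of the next free slot as -(next + 1).
class LinkedFreeSlots {
    long[] blockOffset;
    int firstFree = -1; // head of the free list; would live in the first page

    LinkedFreeSlots(int slots) {
        blockOffset = new long[slots];
        java.util.Arrays.fill(blockOffset, 1L); // start with every slot in use
    }

    // O(1): pop the head of the free list instead of scanning the table.
    int allocSlot() {
        if (firstFree < 0) return -1;
        int slot = firstFree;
        firstFree = (int) (-blockOffset[slot] - 1); // decode next free index
        blockOffset[slot] = 1L;                     // placeholder real offset
        return slot;
    }

    // O(1): push the freed slot onto the head of the free list.
    void freeSlot(int slot) {
        blockOffset[slot] = -(firstFree + 1L); // encode current head as "next"
        firstFree = slot;
    }
}
```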
This would have required a change to the binary format of the jdbm .db file, so I opted for the less elegant, but almost as effective, mechanism that is currently committed.
2. The current strategy (hack) is effective because the pages containing the lookup tables wind up in cache pretty quickly. We then store the offset of the last found open slot on that page (in the cache). From that point forward, the linear search is actually doing a single query on each full page, so we really wind up just walking the page list looking for one that has a slot available.
This won't scale all that well in theory, but a page holds a heck of a lot of entries (I think it works out to 14 bytes per entry, so each page will have about 575 entries in it). I think for it to become a problem you'd have to have enough entries in the free lists to consume enough pages that they would be evicted from the cache, in which case you'd be back to a full linear search of each page, each time around.
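To show why the hack helps, here is a sketch of the page walk it produces. The class name, the per-page entry count, and the transient counter are all illustrative stand-ins for whatever the committed code actually does:

```java
// Sketch: once each cached page carries a transient count of its allocated
// slots (rebuilt when the page is loaded), a full page is rejected with one
// comparison, so walking N pages costs N checks rather than N * 575 probes.
class FreeRowIdPage {
    static final int ENTRIES_PER_PAGE = 575; // roughly 14 bytes per entry
    int allocatedCount; // transient: how many slots on this page are in use

    boolean isFull() { return allocatedCount == ENTRIES_PER_PAGE; }

    // Walking the page list: skip full pages with a single comparison each.
    static int findPageWithFreeSlot(FreeRowIdPage[] pages) {
        for (int i = 0; i < pages.length; i++) {
            if (!pages[i].isFull()) return i;
        }
        return -1;
    }
}
```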
What do you think about moving the remainder of this discussion to the listserv to make it easier for Alex to join in?