Colin was requesting a feature to insert a physical row at a specified location in the file.  It sounds
like that is all he needs to provide a pack feature.  If that is true, perhaps we can write the insert
method and Colin could contribute the rest of the pack feature?
In other news, I am working through the test suite for a page-at-a-time record allocation strategy.
It will buffer serialized records until there are enough to fill up a page and then batch them onto
the page.  I am hoping that this will produce much more efficient installations of records into pages.
This will be an optional feature that can be enabled using RecordManagerOptions.
Once we have this working, it should be a minor change to introduce a clustering feature.
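As a rough sketch of that buffering strategy (class and method names here are hypothetical, not actual jdbm API; the real version would install the batch through the record file and be switched on via RecordManagerOptions):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the page-at-a-time idea: serialized records are
// buffered until a page's worth has accumulated, then the whole batch is
// installed onto a single page.  (Records larger than a page are not
// handled here.)
class PageBatchBuffer {
    static final int PAGE_SIZE = 8192;   // assumed page capacity in bytes

    private final List<byte[]> pending = new ArrayList<>();
    private int pendingBytes = 0;
    private int pagesFlushed = 0;

    /** Buffer one serialized record; flush a full page's worth first if needed. */
    void insert(byte[] serializedRecord) {
        if (pendingBytes + serializedRecord.length > PAGE_SIZE) {
            flushPage();
        }
        pending.add(serializedRecord);
        pendingBytes += serializedRecord.length;
    }

    /** Install all buffered records onto one page in a single operation. */
    void flushPage() {
        if (pending.isEmpty()) return;
        // In jdbm this is where the batch would be written into a data page;
        // here we only count the page-level installations.
        pagesFlushed++;
        pending.clear();
        pendingBytes = 0;
    }

    int pagesFlushed() { return pagesFlushed; }
}
```

The win is that N record inserts collapse into one page-level installation per full page instead of N separate allocations.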
-----Original Message-----
From: [] On Behalf Of Kevin Day
Sent: Tuesday, May 30, 2006 1:51 PM
Subject: re[2]: [Jdbm-developer] Free page selection bug

Wanted to float an intermediate idea here - it depends on the database utilization, but may provide a quick out.
The primary issue with the first fit algorithm was that, in certain reasonable usage models, it resulted in massive amounts of wasted space.  That was more a result of the allocator not supporting splitting or coalescing than anything else.  The downside of the current 'fix' for this issue is that other (quite reasonable) usage scenarios result in a huge amount of wasted CPU cycles (dontcha love these kinds of tradeoffs?).
The ultimate fix for this, of course, is to re-design the allocation strategy completely and include splitting and coalescing (or have that behavior be implicit to the algorithm).  There has been quite a bit of discussion about strategies in this area, but things move slowly, so I'm not holding my breath for any of them.
It seems to me that the immediate workaround here is to add a pack capability to jdbm.  I believe that this could be done, even with the current data structures, by creating a temporary lookup table for reverse mapping physical offsets to recids.  The downside to this approach is that it might require some sort of quiescence in the system to get it right, so if the database is going to be hammered 24x7 it may not be the right solution - but I thought I'd toss it out.
Here is how the algorithm could work:
1. The pack operation is triggered.
2. jdbm creates a new, separate jdbm database with transactions turned off, optimized purely for speed (this is a temporary file).
3. jdbm adds a BTree to act as a reverse lookup table.  Keys are Long physical offsets; values are Long recids.
4. jdbm cranks through the translation table and inserts a physical offset -> recid mapping into the BTree for every record.
5. jdbm then grabs the first data page and checks whether the first record on the page has identical length and used values.  If not, then in a single transaction it moves the second record up to consume the slack, updates the length and used values of the first record, looks up the recid of the second physical record in the BTree, updates the offset of the second physical record in the translation table, and closes the transaction.
6. As an additional check, the system should compare the physical offset in the current translation table against the physical offset of the actual record.  If they are not the same, the record shouldn't be moved, and the next record should be considered.
I'm pretty sure that it should be possible to implement this in a manner that allows for partial pack operations.
The database will certainly grow, but the pack operation will shrink it back down.
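To make the move-and-fix-up step concrete, here is an in-memory toy of the pack pass.  All names are hypothetical; the real version would keep the reverse map in the BTree inside the temporary, transaction-free store and wrap each move in its own transaction:

```java
import java.util.List;
import java.util.TreeMap;

// In-memory toy of the proposed pack pass.  Each record has a slot length
// and the bytes actually used; packing slides records down so that
// length == used everywhere, then fixes up the translation table.
class PackSketch {
    static class Rec {
        long recid; long offset; int length; int used;
        Rec(long recid, long offset, int length, int used) {
            this.recid = recid; this.offset = offset;
            this.length = length; this.used = used;
        }
    }

    /** Pack records in physical order; returns the number of bytes reclaimed. */
    static long pack(List<Rec> records, TreeMap<Long, Long> translation) {
        records.sort((a, b) -> Long.compare(a.offset, b.offset));
        // Step 1: build the reverse lookup table, physical offset -> recid.
        TreeMap<Long, Long> reverse = new TreeMap<>();
        for (Rec r : records) reverse.put(r.offset, r.recid);
        long oldEnd = records.isEmpty() ? 0
                : records.get(records.size() - 1).offset
                + records.get(records.size() - 1).length;
        long nextFree = records.isEmpty() ? 0 : records.get(0).offset;
        for (Rec r : records) {
            long recid = reverse.get(r.offset);
            // Safety check from the proposal: only move a record whose
            // translation-table entry still agrees with its physical offset.
            Long mapped = translation.get(recid);
            if (mapped == null || mapped != r.offset) {
                nextFree = r.offset + r.length;
                continue;
            }
            r.offset = nextFree;
            r.length = r.used;                 // shrink the slot to what is used
            translation.put(recid, r.offset);  // fix up the translation table
            nextFree += r.length;
        }
        return oldEnd - nextFree;              // space given back at the tail
    }
}
```

Because records are processed in ascending physical order, a record only ever slides toward lower offsets, so the moves never overwrite a record that has not been processed yet - which is also what makes partial pack runs safe to resume.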
Anyway, just wanted to get my thoughts out there on this in case anyone wants to run with it.  It should be considerably easier (and less error-prone) than implementing a new allocator, and it should provide the basis of a migration path into a new file format when that happens...
If such a system were in place, it would be possible to go back to a first fit algorithm (with significantly better performance across a much wider field of applications), and still (hopefully) keep the fragmentation down.
- Kevin
> Colin,

I am not seeing as much grief from the allocation strategy as you
describe.  I expect that there is an interaction with the allocations
that are being performed by your application.  The change was introduced
since it produced smaller stores for our RDFS database without a loss in
speed, but clearly it is a loss for you.  Let me look at the code and
see what we can change.

With respect to record copying, you should be able to refactor what you
need out of the dump utility.  It has mechanisms for scanning records
from a page perspective, but it does not attempt to do inserts with a
pre-defined logical record id.  I can look into that, I suppose, but let's
see where the update (below) gets you.

I've attached a modified version of FreePhysicalRowIdPage.  Please try
it out and let me know what impact it has on your application.  It passes
the jdbm test suite, but there are no performance tests that specifically
target this feature.  It has no performance impact (time or disk space) on
our RDFS database application.


_______________________________________________
Jdbm-developer mailing list