On the IDE stuff -- I've seen Eclipse do stuff like that many times as well -- an unavoidable consequence of multiple compiler implementations, etc.

I've used sqlite far more than Access (the latter only on one project another lifetime ago).  Don't want to spread rumors about myself, etc. ;-)

Yes, I'm already calling .synchronizeLog, with good results (and traversing down through however many interstitial RecordManagers happen to be created for me to get to the BaseRecordManager).

Having an option to obtain this behaviour automatically is a good thought.  I'll definitely put that on the todo list.  Right now, I'm finishing up a Lucene Directory implementation on top of jdbm (which I'll probably be putting up on github eventually, along with a clojure wrapper library).  Once I'm done there, I'll take a look at the max transaction configuration.

Thanks,

- Chas


On Aug 14, 2009, at 1:47 PM, Kevin Day wrote:

I've applied the changes from your patch to SVN (who knows what NB is doing - but the changes are benign, and if it makes things a bit easier for you, then that works for me).
 
 
 
On the db vs lg file thing - I kind of had a suspicion on that.  As a total aside: MS Access has the log inside a region of the mdb file itself (the ldb is actually just a lock file - not a log).
 
Note that if jdbm is shut down *properly* (no crashing), then the log file is not needed and all data will be in the database.  If you don't care about durability, you could run without the transaction manager, and you won't have a log file at all.
 
If you want to manually sync the log, I suggest that you get hold of the transaction manager and call it directly: recMan.getTransactionManager().synchronizeLog().  I'm pretty sure that there's no need to add methods anywhere.
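In stub form, the unwrap-then-sync pattern looks something like the following. The class and method names (CacheRecordManager, BaseRecordManager, getRecordManager(), getTransactionManager(), synchronizeLog()) are the ones discussed in this thread, but the classes below are minimal stand-ins, not jdbm's real implementations:

```java
// Minimal stand-ins for jdbm's classes, just to illustrate the
// unwrap-then-sync pattern; the real API lives in the jdbm jar.
import java.io.IOException;

interface RecordManager {}

class TransactionManager {
    boolean synced = false;
    void synchronizeLog() throws IOException {
        synced = true; // the real implementation flushes the .lg file into the .db file
    }
}

class BaseRecordManager implements RecordManager {
    private final TransactionManager txnMgr = new TransactionManager();
    TransactionManager getTransactionManager() { return txnMgr; }
}

class CacheRecordManager implements RecordManager {
    private final RecordManager wrapped;
    CacheRecordManager(RecordManager wrapped) { this.wrapped = wrapped; }
    RecordManager getRecordManager() { return wrapped; }
}

public class SyncDemo {
    // Walk down through however many wrapper RecordManagers exist
    // until we hit the BaseRecordManager, then sync its log.
    static void syncLog(RecordManager rm) throws IOException {
        while (rm instanceof CacheRecordManager) {
            rm = ((CacheRecordManager) rm).getRecordManager();
        }
        ((BaseRecordManager) rm).getTransactionManager().synchronizeLog();
    }

    public static void main(String[] args) throws IOException {
        BaseRecordManager base = new BaseRecordManager();
        RecordManager rm = new CacheRecordManager(new CacheRecordManager(base));
        syncLog(rm);
        System.out.println(base.getTransactionManager().synced); // prints: true
    }
}
```

This is essentially the traversal Chas describes above; against real jdbm you'd substitute the actual classes from the jar.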
 
 
However, what I really think you are asking for can be achieved by allowing only a single transaction in the log.  This would trigger a sync for every transaction (it would be a lot slower than storing multiple transactions in the log - but it will have the same performance characteristics as if you called synchronizeLog() manually every time you called commit()).  There is a variable in TransactionManager (_maxTxns) that controls this behavior.  The default is 10.  If you set it to 1, I'll bet that you get the behavior you are going for.  You can set the value using TransactionManager#setMaximumTransactionsInLog().
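To make the tradeoff concrete, here's a toy model of the bookkeeping described above (not jdbm code): commits accumulate in the log, and a sync to the db file happens only once the log holds _maxTxns transactions, so a maximum of 1 means every commit syncs:

```java
// Toy model of the transaction log behavior described above -- not jdbm code.
// Commits accumulate in the log; a sync to the .db file happens only
// when the log holds maxTxns committed transactions.
class TxnLog {
    private final int maxTxns;
    private int pending = 0;
    int syncs = 0;

    TxnLog(int maxTxns) { this.maxTxns = maxTxns; }

    void commit() {
        pending++;
        if (pending >= maxTxns) {
            syncs++;      // real jdbm would flush the log into the db here
            pending = 0;
        }
    }
}

public class MaxTxnsDemo {
    public static void main(String[] args) {
        TxnLog dflt = new TxnLog(10);  // jdbm's default
        TxnLog eager = new TxnLog(1);  // sync on every commit
        for (int i = 0; i < 100; i++) { dflt.commit(); eager.commit(); }
        System.out.println(dflt.syncs);  // prints: 10
        System.out.println(eager.syncs); // prints: 100
    }
}
```

Ten times the syncs over the same 100 commits is exactly where the slowdown comes from.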
 
To do this right, the place to enhance jdbm would be to add a new option property to RecordManagerOptions, and add behavior to Provider#createRecordManager() that calls TransactionManager#setMaximumTransactionsInLog() if the property is specified.  If you'd like to take a crack at that, I'd be happy to review and apply a patch.  Ideally, also include a unit test that confirms the behavior when the property is provided...  Not sure about the best way to write that test - I'll leave that to your imagination :-)
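A rough sketch of what that plumbing could look like. RecordManagerOptions, Provider#createRecordManager(), and setMaximumTransactionsInLog() are the real names from this thread, but the classes below are stubs, and the property key ("jdbm.txnsInLog") is just a hypothetical suggestion:

```java
import java.util.Properties;

// Stub of jdbm's TransactionManager, holding only the knob we care about.
class TransactionManager {
    private int maxTxns = 10; // jdbm's default
    void setMaximumTransactionsInLog(int n) { maxTxns = n; }
    int getMaximumTransactionsInLog() { return maxTxns; }
}

class RecordManagerOptions {
    // Hypothetical new option key -- the name is just a suggestion.
    static final String TXNS_IN_LOG = "jdbm.txnsInLog";
}

public class Provider {
    // Sketch of Provider#createRecordManager honoring the new property.
    static TransactionManager createRecordManager(Properties options) {
        TransactionManager txnMgr = new TransactionManager();
        String txns = options.getProperty(RecordManagerOptions.TXNS_IN_LOG);
        if (txns != null) {
            txnMgr.setMaximumTransactionsInLog(Integer.parseInt(txns));
        }
        return txnMgr;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty(RecordManagerOptions.TXNS_IN_LOG, "1");
        TransactionManager tm = createRecordManager(props);
        System.out.println(tm.getMaximumTransactionsInLog()); // prints: 1
    }
}
```

The unit test Kevin mentions would amount to creating a record manager with and without the property set and asserting the resulting maximum.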
 
Please let me know what you think!
 
- K
 
----------------------- Original Message -----------------------
  
From: Chas Emerick <cemerick@snowtide.com>
Cc: 
Date: Thu, 13 Aug 2009 18:58:51 -0400
Subject: Re: [Jdbm-general] commit() semantics, log vs. database files
  
I guess I somehow got it in my head that jdbm was focused on having a single-file datastore (likely not from its docs, but from chattering message boards, etc.).

The notion of log files being only for transient storage is pretty consistent across the embedded DBs I happen to have used (sqlite and [god help me] Access), although obviously Derby has a multitude of files.

The use case is simply that I'm looking at jdbm to provide a user-facing datastore, so having a single file is a big deal in terms of ease of backup, user familiarity, etc.  Having the synchronizeLog method there certainly gets the job done.

Regarding the compile errors, they're only in the NetBeans IDE (v6.7  
FWIW), not in the actual build (I think NB is just being a little  
pickier about generics stuff than javac is).  I've attached a patch  
file from svn diff (not sure why it produced such a noisy diff on the  
first change...).

Thanks,

- Chas

On Aug 13, 2009, at 5:54 PM, Kevin Day wrote:

> I'm puzzled about the perceived importance of writing stuff to the  
> db file...  commit() in BerkeleyDB, for example, certainly doesn't  
> flush the write from the log to the data file - commit is what  
> actually writes to the log (heck, BerkeleyDB uses way more than 2  
> files).  Same goes for non-embedded dbs like MySQL, SQL Server,  
> etc...  Whether data gets written to file A or file B really  
> shouldn't matter, and I don't think it would be a good idea for a  
> client application to be trying to determine when data should be  
> moved from file B into file A.  Here's another way to think about  
> it:  the .db and .lg files together comprise the persistent database  
> store.  You can't use one without the other.  How data is  
> transferred between them is an implementation detail of the database  
> - not something a client app should be involved in at all.
>
> What specific use case do you have in mind where it would be  
> appropriate or advantageous for the client code to force the  
> database to move data from the log file into the db file?  Perhaps  
> if I can understand the use-case, I can provide some more  
> clarification.
>
>
> I'm not seeing any compile errors in the HEAD code from SVN.  Are  
> you looking at the correct repository?  https://jdbm.svn.sourceforge.net/svnroot/jdbm/trunk
>
> - K




_______________________________________________
Jdbm-general mailing list
Jdbm-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/jdbm-general