From: Legend of t. D. <lot...@ya...> - 2011-12-10 19:27:25
Hi,

We had a database problem recently, and while investigating we found that the database journal (the data/000000000f.log file) had grown to 44 GB. I believe this is controlled by the following configuration element:

<recovery enabled="yes" group-commit="no" journal-dir="data" size="100M" sync-on-commit="no" force-restart="no" consistency-check="yes"/>

Usually this works well, and the journal files stay roughly below 100 MB. I found the thread below from 2007 on the mailing list, where Chris also had a 44 GB log file. In our case, we were attempting to shut down the database service normally; it hung for a very long time until the JVM timed out and the service killed it. When we brought the database back up, we had to do the usual recovery work (remove certain dbx files, reindex) to get it running again. Strange coincidence? Any additional data, or a way to prevent this from happening in the future? With 44 GB, I am glad we did not run out of space on the disk!

Thanks,
Lothy

=================================== QUOTE (http://exist-open.markmail.org/search/?q=huge+journal#query:huge%20journal+page:1+mid:xebtmxw5jzrqdxb4+state:results) ===================================

I'm using eXist 1.1.1 newcore. Over the last 3 days I pumped a lot of data (2.6 GB) into the database at 4-5 minute intervals using the XMLDB API. However, when I looked into the data directory, I noticed a massive 44 GB log file. Am I missing something here? Is there something I can do (configuration, cleanup, etc.) to rotate the log file so that it becomes smaller?

Normally, the journal log will be cleared (replaced by a new one) once it grows beyond 100 MB (configurable in conf.xml). This only works for shorter transactions, though, as the entire transaction needs to be logged in one piece. So if you get a 44 GB log file, it usually means you had a really huge transaction. The only operation I know of that could generate such a log file is removing a collection.

Did you remove a large collection before stopping the db? If not, you have probably hit a bug, and it would be interesting to know which operations need to be applied in order to reproduce it.

Wolfgang
===================================
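Until the root cause is found, one practical stopgap is to watch the journal directory and warn before a runaway log file fills the disk. Below is a minimal monitoring sketch, not part of eXist itself: the `data` directory path, the `*.log` filename pattern (matching the `journal-dir="data"` setting quoted above), and the 1 GB warning threshold are all illustrative assumptions.

```python
import glob
import os


def oversized_journals(data_dir, threshold_bytes=1 * 1024**3):
    """Return (path, size) pairs for journal files larger than the threshold.

    Assumes eXist journal files match '*.log' inside the configured
    journal-dir (an assumption based on the data/000000000f.log file
    mentioned in the thread above).
    """
    hits = []
    for path in sorted(glob.glob(os.path.join(data_dir, "*.log"))):
        size = os.path.getsize(path)
        if size > threshold_bytes:
            hits.append((path, size))
    return hits


if __name__ == "__main__":
    # Run from cron or a scheduler; alerting is left to taste.
    for path, size in oversized_journals("data"):
        print("WARNING: journal %s is %.1f GB" % (path, size / 1024**3))
```

A cron job running this every few minutes would have flagged the 44 GB journal long before it threatened the disk.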