I am working off source code. I needed to make a change to org.hsqldb.lib.tar.DbBackup, making generateBufferBlockValue(File file) public, so that an embedded client app can restore from a backup that is made passively for it.
I see no reason for the restriction. If you want to be more aggressive, the method could even be moved into the TarReader class. As it stands, using this functionality requires either hardcoding the block value or copying the code into the app if you are not working off source.
Not a big priority, but also not a big change.
Just to keep this with the earlier note: I am not sure whether I am having a problem doing an embedded restore, or whether hot backup is not working. The process works when BLOCKING. I am doing a checkpoint right before. The backup is done with either:
BACKUP DATABASE TO 'tarFile' NOT BLOCKING
BACKUP DATABASE TO 'tarFile' BLOCKING
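For reference, issuing the checkpoint and backup from an embedded app might look roughly like this. This is a hedged sketch, not code from this thread: the JDBC URL, paths, and credentials are placeholders, and it assumes the hsqldb jar is on the classpath. A target path ending in '/' makes HSQLDB write an auto-named, timestamped .tar.gz into that directory.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BackupSketch {
    public static void main(String[] args) throws Exception {
        new java.io.File("/tmp/backups").mkdirs();  // placeholder target dir
        // Placeholder file-mode URL; BACKUP DATABASE needs a file: database.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hsqldb:file:/tmp/my_db/my_db", "SA", "");
             Statement st = conn.createStatement()) {
            st.execute("CHECKPOINT");  // sync the database first, as above
            st.execute("BACKUP DATABASE TO '/tmp/backups/' BLOCKING");
            st.execute("SHUTDOWN");
        }
    }
}
```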
The results are (NOT BLOCKING first, then BLOCKING):
1 / 5 my_db.properties…
2 / 5 my_db.script…
3 / 5 my_db.data…
4 / 5 my_db.backup…
5 / 5 my_db.log…
1 / 3 my_db.properties…
2 / 3 my_db.script…
3 / 3 my_db.data…
Importing always with:
new TarReader(tarFile, TarReader.OVERWRITE_MODE, null, new Integer(DbBackup.generateBufferBlockValue(tarFile)), destDir).read();
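Spelled out with imports, that import call looks roughly like the sketch below. The file paths are placeholders, and the call to DbBackup.generateBufferBlockValue assumes the patch discussed above that makes the method public; against an unpatched release it will not compile.

```java
import java.io.File;

import org.hsqldb.lib.tar.DbBackup;
import org.hsqldb.lib.tar.TarReader;

public class RestoreSketch {
    public static void main(String[] args) throws Exception {
        File tarFile = new File(args[0]);  // the backup .tar / .tar.gz
        File destDir = new File(args[1]);  // directory to restore into
        destDir.mkdirs();

        // OVERWRITE_MODE replaces same-named files already in destDir.
        // The Integer argument is the tar read-buffer blocking factor;
        // generateBufferBlockValue derives a suitable value from the
        // archive size (public only with the patch discussed here).
        new TarReader(tarFile, TarReader.OVERWRITE_MODE, null,
                Integer.valueOf(DbBackup.generateBufferBlockValue(tarFile)),
                destDir).read();
    }
}
```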
2012-02-17T16:14:50 0* 700 112 - my_db.properties
2012-02-17T16:14:50 0* 700 2685 - my_db.script
2012-02-17T16:14:50 0* 700 786432 - my_db.data
2012-02-17T16:19:08 0* 700 113 - my_db.properties
2012-02-17T16:19:08 0* 700 2685 - my_db.script
2012-02-17T16:19:07 0* 600 524456 - my_db.data
It did not bring in the backup or log files, but then the checkpoint is very close. The problem shows up later when selecting from tables that were written very shortly before the backup, or maybe after; see below. For this small database the checkpoint plus backup takes only about 0.150 sec, so I am just going to block, but I thought you should see this.
2012-02-17T16:23:20.811-0500 WARNING failed to read a byte array
at org.hsqldb.persist.ScaledRAFile.read(Unknown Source)
at org.hsqldb.persist.ScaledRAFile.readInt(Unknown Source)
at org.hsqldb.persist.ScaledRAFileHybrid.readInt(Unknown Source)
at org.hsqldb.persist.DataFileCache.readObject(Unknown Source)
at org.hsqldb.persist.DataFileCache.getFromFile(Unknown Source)
at org.hsqldb.persist.DataFileCache.get(Unknown Source)
at org.hsqldb.persist.RowStoreAVLDisk.get(Unknown Source)
at org.hsqldb.index.NodeAVLDisk.findNode(Unknown Source)
at org.hsqldb.index.NodeAVLDisk.getLeft(Unknown Source)
at org.hsqldb.index.IndexAVL.next(Unknown Source)
at org.hsqldb.index.IndexAVL.next(Unknown Source)
at org.hsqldb.index.IndexAVL$IndexRowIterator.getNextRow(Unknown Source)
at org.hsqldb.RangeVariable$RangeIteratorMain.findNext(Unknown Source)
at org.hsqldb.RangeVariable$RangeIteratorMain.next(Unknown Source)
at org.hsqldb.QuerySpecification.buildResult(Unknown Source)
at org.hsqldb.QuerySpecification.getSingleResult(Unknown Source)
at org.hsqldb.QuerySpecification.getResult(Unknown Source)
at org.hsqldb.StatementQuery.getResult(Unknown Source)
at org.hsqldb.StatementDMQL.execute(Unknown Source)
at org.hsqldb.Session.executeCompiledStatement(Unknown Source)
at org.hsqldb.Session.executeDirectStatement(Unknown Source)
at org.hsqldb.Session.execute(Unknown Source)
at org.hsqldb.jdbc.JDBCStatement.fetchResult(Unknown Source)
at org.hsqldb.jdbc.JDBCStatement.executeQuery(Unknown Source)
Please open separate threads for separate issues.
Re. #1: If Fred has no objection, I'll make the generateBufferBlockValue method public.
Re. #2, some tips to help isolate the issue.
I'd stick with blocking and get that working perfectly, as you are more or less doing; then we can work on any non-blocking issues.
The blocking backup process backs up a synced database and therefore there will be no log file to back up.
It would be counter-productive to back up backup files, especially since the database is in a fully synced state.
Try a clean, fresh restoration of the backup without your app, thereby eliminating app-integration issues. You can use SqlTool or DatabaseManager or whatever to validate the restored database.
To eliminate TarReader as a possible source of problems, use plain tar to extract. If you are on Windows, I recommend msysgit from http://code.google.com/p/msysgit/downloads/list; real GNU tar is in its bin subdirectory. Or 7-Zip. With gzipped tars in 7-Zip you have to extract twice: once to ungzip and once for the tar extract. These are speculative recommendations, since I can't remember the last time I extracted these backups on Windows.
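The two-step extraction described above (ungzip first, then untar) can also be done from Java with nothing but the JDK. Here is a minimal sketch of the ungzip half; the file names are placeholders, and the resulting plain .tar can then be opened with any tar tool for the second step:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.GZIPInputStream;

public class Gunzip {
    // Decompress src (e.g. my_db.tar.gz) to dest (e.g. my_db.tar),
    // streaming through a fixed-size buffer.
    static void gunzip(String src, String dest) throws IOException {
        try (InputStream in = new GZIPInputStream(new FileInputStream(src));
             OutputStream out = new FileOutputStream(dest)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }
}
```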
Well, public access would be a good idea if it helps the user with embedding in apps.
Re restoring hot backup sets using our offline backup tool: The output you report indicates our tool is not restoring the .backup and .log file which are part of the hot backup set. I will look into this issue later. In the meantime, try restoring the hot backup set using an external gzip / unzip tool.
I double-clicked the Windows-made tar.gz on OS X and it expanded to only 3 files, so my best guess is that the problem is on the backup side, as opposed to the restore.
Correct me if I'm wrong, but what I wrote earlier is how I recollect it: when I implemented cold backup I did not store the .log or .backup files, because the former will not be present after the internal procedure does the explicit sync, and the backup does not need a nested backup (even if the backup file is present). I don't know if anybody else did, but I never modified that shared behavior when hot backup was implemented, and I was not involved in any testing of hot backup.
The problems with hot backup were fixed in March and committed to SVN. The fix will be in the next release.
Good! It turns out my use case works just as well with blocking, since the DB is so small. I am currently freezing all unnecessary changes, but I hope the fix will help someone else.
It certainly looked like it worked before, and it is difficult to reproduce the conditions that would produce a failure.