

Checkpoint defrag

Created: 2013-09-09
Updated: 2014-01-19
  • Tapan Sharma
    2013-09-09

    Dear Team,

    I am using HSQLDB version 1.8.
    I have a class which creates a connection to a file database (CACHED tables).

    Connection String:

    public static String CONNECTION = "jdbc:hsqldb:file:" + DBFILEPATH + ";shutdown=true";
    

    I have set the checkpoint defrag limit to 300 MB.
    Per my understanding, as soon as the unused space reaches 300 MB, HSQLDB should perform a defrag at the next checkpoint and reduce the .data file size.
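
    The limit is set with SQL along these lines (a sketch, assuming an open Connection conn; SET CHECKPOINT DEFRAG takes the size in megabytes of abandoned space):

    Statement st = conn.createStatement();
    st.execute("SET CHECKPOINT DEFRAG 300"); // defrag at the next checkpoint once wasted space exceeds 300 MB
    st.close();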

    My module keeps creating tables using this class and drops them after use. In this way, the unused space keeps on increasing.
    Although the .data file has grown beyond 400 MB, its size has not decreased.

    I am assuming that HSQLDB keeps running in memory even though there are no connections open. Is my understanding correct? If yes, could you please tell me what I should do to reduce the .data file size? I don't want to issue the CHECKPOINT DEFRAG command inline, as it would slow down the operation; I want to perform the defragmentation in a background thread.

    Regards
    Tapan

  • Fred Toussi
    2013-09-09

    With shutdown=true the database is shut down when the last connection is closed.

    You should perform CHECKPOINT DEFRAG from your code after dropping the table. This will be fast, but you don't have to do it each time you drop a table.
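
    A minimal sketch of that, assuming an open Connection conn and a hypothetical table name TEMP_DATA:

    Statement st = conn.createStatement();
    st.execute("DROP TABLE TEMP_DATA");  // leaves abandoned space in the .data file
    st.execute("CHECKPOINT DEFRAG");     // rewrites the .data file to reclaim that space
    st.close();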

    Please use the latest version if you can.

  • Tapan Sharma
    2013-09-11

    Fred,

    I have created a separate daemon thread which keeps running and executes a "CHECKPOINT DEFRAG" statement every 15 minutes.
    But to my surprise, it is not cleaning up the unused space.
    I have checked my code; neither the "DROP TABLE" nor the "CHECKPOINT DEFRAG" statement throws any exception.
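
    For reference, the daemon looks roughly like this (a sketch, assuming the CONNECTION string above and the default "sa" user with an empty password):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ThreadFactory;
    import java.util.concurrent.TimeUnit;

    public class DefragDaemon {

        static {
            try {
                Class.forName("org.hsqldb.jdbcDriver"); // register the HSQLDB 1.8 driver
            } catch (ClassNotFoundException e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        public static void start(final String url) {
            ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor(new ThreadFactory() {
                    public Thread newThread(Runnable r) {
                        Thread t = new Thread(r, "hsqldb-defrag");
                        t.setDaemon(true); // daemon thread: does not keep the JVM alive
                        return t;
                    }
                });
            scheduler.scheduleWithFixedDelay(new Runnable() {
                public void run() {
                    Connection conn = null;
                    try {
                        conn = DriverManager.getConnection(url, "sa", "");
                        Statement st = conn.createStatement();
                        st.execute("CHECKPOINT DEFRAG"); // rewrite the .data file, dropping abandoned space
                        st.close();
                    } catch (SQLException e) {
                        e.printStackTrace();
                    } finally {
                        if (conn != null) {
                            try { conn.close(); } catch (SQLException ignore) { }
                        }
                    }
                }
            }, 15, 15, TimeUnit.MINUTES);
        }
    }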

  • Tapan Sharma
    2013-09-11

    Thanks. It worked.
    I had to remove the "shutdown=true" argument from the connection string.
    Now I am able to reduce the data file size, and performance has improved as well.
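
    That is, the connection string is now:

    public static String CONNECTION = "jdbc:hsqldb:file:" + DBFILEPATH;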