
.script file growth during SHUTDOWN COMPACT

Created: 2002-10-01
Updated: 2014-01-19
  • Nobody/Anonymous

    Hi all!
    I have a fairly big cached database, with a .data file of around 400 MB and a .backup file of around 100 MB.
    I want to do a SHUTDOWN COMPACT, but the script file grows enormously until (after 6 hours ;)) the disk runs out of space... I noticed the script file had reached about 44 GB!
    Is there a workaround?
    Thanks in advance!

     
    • Fred Toussi

      Fred Toussi - 2002-10-01

      This is strange. The size of the script file should not be more than 4 times the size of the .data file, and it is usually smaller than that. The 4x size is only reached when most of the data is in BINARY columns.

      You can use the SCRIPT 'filename' command and see if it produces a smaller script. This should not take more than 30 minutes on an average PC.
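
      For example, the dump can be produced from a JDBC connection like any other statement. A minimal sketch, assuming the standard HSQLDB driver; the connection URL and file name are placeholders for your own setup:

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.Statement;

      public class DumpScript {
          public static void main(String[] args) throws Exception {
              Class.forName("org.hsqldb.jdbcDriver");
              Connection con = DriverManager.getConnection("jdbc:hsqldb:mydb", "sa", "");
              Statement st = con.createStatement();
              // Writes a full SQL dump of the database to the named file
              st.execute("SCRIPT 'dump.script'");
              st.close();
              con.close();
          }
      }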

       
    • Nobody/Anonymous

      Hi Fred, thanks for your reply. I tried the command, but after 30 minutes (I have an Athlon 1500 with 256 MB DDR) the file is around 2.55 GB.
      No change, it seems...
      I've also noticed another problem with the database:
      I have a timestamp as the primary key. The data spans about 10 years, with a record roughly every 3 hours... I tried a query (from 31/12/1992 to 01/02/1993) which should return about 250 records... and after some minutes of processing... out of memory error!
      Could the problem be that the database was originally created with HSQLDB 1.6 and transferred "by hand" to HSQLDB 1.7?

       
      • Fred Toussi

        Fred Toussi - 2002-10-01

        I don't know what caused the problem, but it seems there is some corruption. Can you check the contents of the script dump you just made and see if it contains INSERT statements that correspond to the records in your database?
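
        One way to look inside a dump that large without opening it in an editor is to stream it line by line and count the INSERT statements. A rough sketch; the file and table names are placeholders:

        import java.io.BufferedReader;
        import java.io.FileReader;

        public class CountInserts {
            public static void main(String[] args) throws Exception {
                BufferedReader in = new BufferedReader(new FileReader("dump.script"));
                long count = 0;
                String line;
                while ((line = in.readLine()) != null) {
                    // Count the rows dumped for the table in question
                    if (line.startsWith("INSERT INTO TABLE_A")) {
                        count++;
                    }
                }
                in.close();
                System.out.println("INSERT statements for TABLE_A: " + count);
            }
        }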

         
    • Nobody/Anonymous

      I don't know how to make the computer open a 2.55 GB file in a reasonable time :)
      In the meantime I'm trying to reimport the data from scratch...

       
    • Nobody/Anonymous

      Hi Fred,
      I was trying to reimport the data, but I've run into some "OutOfMemory" errors.
      Could it be my code, or am I misusing HSQLDB or SQL... or could HSQLDB itself be running out of memory?
      I'm importing from text data into a table with a timestamp as the primary key and one column already filled. Now I have to import the other columns, but, as there could be new timestamps, I have to try an insert and, if that fails, update the row instead. My procedure is:
      try {
          // Insert a new row
          database.execute("INSERT INTO TABLE_A VALUES ('" + Instant + "'," + Value + ",NULL)");
      }
      catch (SQLException inserting) {
          // If there was an exception on inserting the data, try to update
          try {
              database.execute("UPDATE TABLE_A SET VV=" + Value + " WHERE Recording='" + Instant + "'");
          }
          catch (SQLException update) {
              // Otherwise do nothing
          }
      }
      I don't know if this is the wrong way to reach the goal, or whether it could be what makes HSQLDB hang and run out of memory... thanks for your kind help again!
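
      For reference, a sketch of the same insert-or-update logic using PreparedStatements, assuming an open java.sql.Connection named con and the table layout above (the value type is an assumption):

      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.SQLException;
      import java.sql.Timestamp;

      public class InsertOrUpdate {
          // Insert a row for the given timestamp; if it already exists, update it instead.
          // Column names (VV, Recording) follow the snippet above; the value type is assumed.
          static void insertOrUpdate(Connection con, Timestamp instant, double value)
                  throws SQLException {
              PreparedStatement insert = con.prepareStatement(
                      "INSERT INTO TABLE_A VALUES (?, ?, NULL)");
              PreparedStatement update = con.prepareStatement(
                      "UPDATE TABLE_A SET VV = ? WHERE Recording = ?");
              try {
                  insert.setTimestamp(1, instant);
                  insert.setDouble(2, value);
                  insert.executeUpdate();
              } catch (SQLException duplicate) {
                  update.setDouble(1, value);
                  update.setTimestamp(2, instant);
                  update.executeUpdate();
              } finally {
                  insert.close();
                  update.close();
              }
          }
      }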

       
      • Fred Toussi

        Fred Toussi - 2002-10-03

        If the table you are inserting into is a cached table, you should either reduce the cache_scale property or increase the maximum memory available to the Java virtual machine.

        Unless your update affects very many rows, it should not take up a lot of memory.

         
    • Nobody/Anonymous

      I tried to increase the available memory, but nothing changed.
      I don't know how to reduce the cache_scale property :)
      I think it must be the database, because the first task (inserting new rows) runs without trouble... but when it comes to the update... it hangs every time after ~42,000 rows updated.
      I don't know if that counts as "very many rows".
      Thanks again for any help.

       
      • Fred Toussi

        Fred Toussi - 2002-10-05

        Re cache_scale: try this with 1.7.1 RC4. After creating the DB, shut it down, edit the database.properties file, and change the line below to use a value less than 15.

        hsqldb.cache_scale=14

        You can reduce it further to 13 or 12 if necessary.

        By 'updating very many rows at the same time', I refer to single queries that update very many rows, such as an update query without a where clause that updates all the rows in a table.

        Regarding increasing available memory, make sure you are passing the correct maximum-memory setting to the JVM when you launch it with the java ... command.
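
        For example, on a Sun JVM the maximum heap is set with the -Xmx flag. The class path and main class below are just placeholders for your own import program:

        java -Xmx256m -classpath .;hsqldb.jar MyImporter

        (On Linux/Unix the class path separator is : rather than ;.)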

         
  • Steve T

    Steve T - 2012-12-06

    We're seeing this now with a large (6+ GB) data file that we are trying to CHECKPOINT DEFRAG or SHUTDOWN COMPACT.  The DEFRAG exits with a severe error and the COMPACT results in an ever-growing (apparently runaway) script.new file - today I freed up a bunch of space and had one SHUTDOWN COMPACT produce a 100 GB file.  In a couple of posts here, you've indicated that there is likely database corruption. Is there a way to recover from this scenario?  We're running version 2.0.0. Any tools that might help, or anything to look for?

    Thanks,

    Steve

     
  • Fred Toussi

    Fred Toussi - 2012-12-06

    This is obviously related. We will provide a utility for recovery from this type of issue.

    In the meantime, you can find which table has the problem. If the table has an index apart from its PK, you could try a query such as SELECT COUNT(*) FROM the_table WHERE indexed_column < x and adjust x to find the faulty row. It may even manage to read all the rows.
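
    A rough sketch of automating that search with a binary search over the key range, assuming a numeric indexed column (the table name, column name, and key range are placeholders; the same idea works with timestamp bounds):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class FindBadRow {
        public static void main(String[] args) throws Exception {
            Class.forName("org.hsqldb.jdbcDriver");
            Connection con = DriverManager.getConnection("jdbc:hsqldb:file:mydb", "SA", "");
            Statement st = con.createStatement();
            long lo = 0L;                // assumed lower bound of the key range
            long hi = 100000000L;        // assumed upper bound of the key range
            while (lo < hi) {
                long mid = (lo + hi) / 2;
                try {
                    // Succeeds only if every row below mid can still be read
                    ResultSet rs = st.executeQuery(
                            "SELECT COUNT(*) FROM PROBLEM_TABLE WHERE INDEXED_COL < " + mid);
                    rs.next();
                    rs.close();
                    lo = mid + 1;        // readable: the faulty row is at or above mid
                } catch (SQLException e) {
                    hi = mid;            // failed: the faulty row is below mid
                }
            }
            System.out.println("Smallest failing upper bound: " + lo);
            con.close();
        }
    }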

     
