
Cloudscape becomes open-source

Binh Pham
2004-08-04
2014-01-19
  • Binh Pham
    2004-08-04

    Given that IBM is donating this relational database
    written in Java to Apache, what are your thoughts
    on Cloudscape vs HSQLDB? Who has the upper hand?

    Thanks,

    Binh

     
    • Fred Toussi
      2004-08-07

      As things stand, IMO, we won. It just proves the current dominant position of HSQLDB as _the_ Java database engine -- open or closed source.

      Anyway, this forum is for discussion of HSQLDB development issues. Please repost your message to the hsqldb-user mailing list, where people can comment on the relative merits of each product.

       
    • Finally got around to downloading and trying out Derby today.

      Here's my result, using the basic "Julia Peterson-Clancy" DatabaseManager utility test script:

      Derby:

      ms     count sql                                                                                                            error                                                                               
      ------ ----- -------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------
      n/a    n/a   DROP TABLE Test                                                                                                SQL Exception: 'DROP TABLE' cannot be performed on 'TEST' because it does not exist.
      50     1000  CREATE TABLE Test(  Id INTEGER NOT NULL PRIMARY KEY,  FirstName VARCHAR(20),  Name VARCHAR(50),  ZIP INTEGER)                                                                                      
      110    1000  create unique index idx1 on Test(id)                                                                                                                                                               
      31310  1000  INSERT INTO Test   VALUES(#,'Julia','Peterson-Clancy',#)                                                                                                                                           
      49050  1000  UPDATE Test SET Name='Hans' WHERE Id=#                                                                                                                                                             
      40700  1000  SELECT * FROM Test WHERE Id=#                                                                                                                                                                      
      43170  1000  DELETE FROM Test WHERE Id=#                                                                                                                                                                        
      110    1000  DROP TABLE Test                                                                                                                                                                                    
      164500 total

      HSQLDB 1.7.2

      ms   count sql                                                                                                   error                                                                       
      ---- ----- ----------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------
      n/a  n/a   DROP TABLE Test                                                                                       java.sql.SQLException: Table not found: TEST in statement [DROP TABLE Test ]
      0    1000  CREATE TABLE Test(  Id INTEGER PRIMARY KEY,  FirstName VARCHAR(20),  Name VARCHAR(50),  ZIP INTEGER)                                                                              
      550  1000  INSERT INTO Test   VALUES(#,'Julia','Peterson-Clancy',#)                                                                                                                          
      990  1000  UPDATE Test SET Name='Hans' WHERE Id=#                                                                                                                                            
      2030 1000  SELECT * FROM Test WHERE Id=#                                                                                                                                                     
      550  1000  DELETE FROM Test WHERE Id=#                                                                                                                                                       
      0    1000  DROP TABLE Test                                                                                                                                                                   
      4120 total

      I'll try to get around to running a test using heavily reused prepared statements, since the Derby docs say the parse/plan-generation overhead is quite high.
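A sketch of what such a rerun might look like (hypothetical class name; assumes a `java.sql.Connection` is already open, so only the reuse pattern is shown): the statement is parsed and planned once, then re-executed with fresh parameters.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class PreparedReuse {

    // The INSERT from the test script, with the '#' placeholders
    // turned into JDBC parameters.
    static final String SQL =
        "INSERT INTO Test VALUES(?, 'Julia', 'Peterson-Clancy', ?)";

    // Prepare once, execute many times: the engine parses and plans
    // the statement a single time, which is exactly the overhead the
    // Derby docs describe as high for direct (unprepared) queries.
    static void insertAll(Connection conn, int rows) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(SQL);
        try {
            for (int id = 0; id < rows; id++) {
                ps.setInt(1, id);   // Id column
                ps.setInt(2, id);   // ZIP column (dummy value)
                ps.executeUpdate();
            }
        } finally {
            ps.close();
        }
    }
}
```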

       
    • Forgot to state:

      Same machine for both results, each taken after 10 "warm-up" runs, both using "embedded" (in-process) mode operation.

      An older machine: PII 266, 256 MB RAM, 5400 RPM drive, Windows 98, JDK 1.4.2_06.

       
      As promised, I found the time today to run a comparative hsqldb/derby (Cloudscape 10.0) JDBCBench: a sample implementation of the Transaction Processing Performance Council Benchmark B, coded in Java and ANSI SQL2 by Mark Matthews.
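For context, a TPC-B-style transaction touches four tables; schematically it looks like the following (the standard TPC-B profile, not necessarily the exact SQL JDBCBench issues):

```sql
-- One TPC-B-style transaction (':xxx' are host parameters):
UPDATE accounts SET Abalance = Abalance + :delta WHERE Aid = :aid;
SELECT Abalance FROM accounts WHERE Aid = :aid;
UPDATE tellers  SET Tbalance = Tbalance + :delta WHERE Tid = :tid;
UPDATE branches SET Bbalance = Bbalance + :delta WHERE Bid = :bid;
INSERT INTO history (Tid, Bid, Aid, delta) VALUES (:tid, :bid, :aid, :delta);
COMMIT;
```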

      This time, tests were run using this:

      java version "1.4.2_04"
      Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_04-b05)
      Java HotSpot(TM) Client VM (build 1.4.2_04-b05, mixed mode)

      OS Name    Microsoft Windows XP Professional
      Version    5.1.2600 Service Pack 1 Build 2600
      OS Manufacturer    Microsoft Corporation
      System Manufacturer    IBM
      System Model    8187KEU
      System Type    X86-based PC
      Processor    x86 Family 15 Model 2 Stepping 9 GenuineIntel ~2793 Mhz
      BIOS Version/Date    IBM 2AKT34AUS, 11/21/03
      SMBIOS Version    2.31
      Windows Directory    C:\WINDOWS
      System Directory    C:\WINDOWS\System32
      Boot Device    \Device\HarddiskVolume1
      Locale    Canada
      Hardware Abstraction Layer    Version = "5.1.2600.1106 (xpsp1.020828-1920)"
      Total Physical Memory    1,024.00 MB
      Available Physical Memory    533.87 MB
      Total Virtual Memory    2.63 GB
      Available Virtual Memory    1.84 GB
      Page File Space    1.64 GB
      Page File    C:\pagefile.sys

      Here's the output:

      C:\java\servers\cloudscape\10.0\frameworks\embedded\bin>runBench

      C:\java\servers\cloudscape\10.0\frameworks\embedded\bin>set CLOUDSCAPE_INSTALL=c:\java\servers\cloudscape\10.0

      C:\java\servers\cloudscape\10.0\frameworks\embedded\bin>set CP=c:\java\servers\cloudscape\10.0\lib\derby.jar;c:\java\servers\cloudscape\10.0\lib\derbytools.jar;
      c:\java\servers\cloudscape\10.0\lib\hsqldbtest.jar

      C:\java\servers\cloudscape\10.0\frameworks\embedded\bin>java -classpath c:\java\servers\cloudscape\10.0\lib\derby.jar;c:\java\servers\cloudscape\10.0\lib\derbytools.jar;c:\java\servers\cloudscape\10.0\lib\hsqldbtest.jar org.hsqldb.test.JDBCBench -driver org.apache.derby.jdbc.EmbeddedDriver -url jdbc:derby:testDb -user test -password test -clients 2 -tpc 1000 -init
      *********************************************************
      * JDBCBench v1.1                                        *
      *********************************************************

      Driver: org.apache.derby.jdbc.EmbeddedDriver
      URL:jdbc:derby:testDb

      Scale factor value: 1
      Number of clients: 2
      Number of transactions per client: 1000

      Start: Wed Nov 24 14:55:40 CST 2004
      Initializing dataset...DBMS: Apache Derby
      In transaction mode
      Already initialized
      done.

      Complete: Wed Nov 24 14:55:41 CST 2004
      * Starting Benchmark Run *

      * Benchmark Report *
      * Featuring <direct queries> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 51.344 seconds.
      Max/Min memory usage: 19238352 / 9582064 kb
      0 / 2000 failed to complete.
      Transaction rate: 38.9529448426301 txn/sec.

      * Benchmark Report *
      * Featuring <direct queries> <transactions>
      --------------------
      Time to execute 2000 transactions: 46.423 seconds.
      Max/Min memory usage: 14465240 / 7504320 kb
      0 / 2000 failed to complete.
      Transaction rate: 43.082092928074445 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 4.125 seconds.
      Max/Min memory usage: 11610336 / 8323048 kb
      0 / 2000 failed to complete.
      Transaction rate: 484.8484848484849 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <transactions>
      --------------------
      Time to execute 2000 transactions: 3.375 seconds.
      Max/Min memory usage: 10746864 / 7195616 kb
      0 / 2000 failed to complete.
      Transaction rate: 592.5925925925926 txn/sec.

      //-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

      C:\java\servers\cloudscape\10.0\frameworks\embedded\bin>runBenchHsql

      C:\java\servers\cloudscape\10.0\frameworks\embedded\bin>set CLOUDSCAPE_INSTALL=c:\java\servers\cloudscape\10.0

      C:\java\servers\cloudscape\10.0\frameworks\embedded\bin>set CP=c:\java\servers\cloudscape\10.0\lib\derby.jar;c:\java\servers\cloudscape\10.0\lib\derbytools.jar;c:\java\servers\cloudscape\10.0\lib\hsqldbtest.jar

      C:\java\servers\cloudscape\10.0\frameworks\embedded\bin>java -classpath c:\java\servers\cloudscape\10.0\lib\derby.jar;c:\java\servers\cloudscape\10.0\lib\derbytools.jar;c:\java\servers\cloudscape\10.0\lib\hsqldbtest.jar org.hsqldb.test.JDBCBench -driver org.hsqldb.jdbcDriver -url jdbc:hsqldb:testDb -user sa -clients 2 -tpc 1000 -init
      *********************************************************
      * JDBCBench v1.1                                        *
      *********************************************************

      Driver: org.hsqldb.jdbcDriver
      URL:jdbc:hsqldb:testDb

      Scale factor value: 1
      Number of clients: 2
      Number of transactions per client: 1000

      Start: Wed Nov 24 14:55:08 CST 2004
      Initializing dataset...DBMS: HSQL Database Engine
      In transaction mode
      Already initialized
      done.

      Complete: Wed Nov 24 14:55:08 CST 2004
      * Starting Benchmark Run *

      * Benchmark Report *
      * Featuring <direct queries> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 1.64 seconds.
      Max/Min memory usage: 10156976 / 7922096 kb
      0 / 2000 failed to complete.
      Transaction rate: 1219.5121951219512 txn/sec.

      * Benchmark Report *
      * Featuring <direct queries> <transactions>
      --------------------
      Time to execute 2000 transactions: 1.063 seconds.
      Max/Min memory usage: 10021720 / 7822928 kb
      0 / 2000 failed to complete.
      Transaction rate: 1881.4675446848544 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 0.391 seconds.
      Max/Min memory usage: 9578088 / 8463984 kb
      0 / 2000 failed to complete.
      Transaction rate: 5115.089514066496 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <transactions>
      --------------------
      Time to execute 2000 transactions: 0.468 seconds.
      Max/Min memory usage: 10936976 / 8917648 kb
      0 / 2000 failed to complete.
      Transaction rate: 4273.504273504273 txn/sec.

       
    • Just to be fair, I thought I'd run the tests again, comparing performance at 100x greater concurrency factor (200 concurrent clients, instead of 2).

      //-----------------------------------------

      C:\java\servers\cloudscape\10.0\frameworks\embedded\bin>set CLOUDSCAPE_INSTALL=c:\java\servers\cloudscape\10.0

      C:\java\servers\cloudscape\10.0\frameworks\embedded\bin>set CP=c:\java\servers\cloudscape\10.0\lib\derby.jar;c:\java\servers\cloudscape\10.0\lib\derbytools.jar;
      c:\java\servers\cloudscape\10.0\lib\hsqldbtest.jar

      C:\java\servers\cloudscape\10.0\frameworks\embedded\bin>java -classpath c:\java\servers\cloudscape\10.0\lib\derby.jar;c:\java\servers\cloudscape\10.0\lib\derbyt
      ools.jar;c:\java\servers\cloudscape\10.0\lib\hsqldbtest.jar org.hsqldb.test.JDBCBench -driver org.apache.derby.jdbc.EmbeddedDriver -url jdbc:derby:testDb -user
      test -password test -clients 200 -tpc 10 -init
      *********************************************************
      * JDBCBench v1.1                                        *
      *********************************************************

      Driver: org.apache.derby.jdbc.EmbeddedDriver
      URL:jdbc:derby:testDb

      Scale factor value: 1
      Number of clients: 200
      Number of transactions per client: 10

      Start: Wed Nov 24 15:28:14 CST 2004
      Initializing dataset...DBMS: Apache Derby
      In transaction mode
      Already initialized
      done.

      Complete: Wed Nov 24 15:28:15 CST 2004
      * Starting Benchmark Run *

      * Benchmark Report *
      * Featuring <direct queries> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 129.237 seconds.
      Max/Min memory usage: 41953592 / 12457336 kb
      0 / 2000 failed to complete.
      Transaction rate: 15.475444338695576 txn/sec.

      * Benchmark Report *
      * Featuring <direct queries> <transactions>
      --------------------
      Time to execute 2000 transactions: 118.179 seconds.
      Max/Min memory usage: 42143184 / 13068008 kb
      0 / 2000 failed to complete.
      Transaction rate: 16.923480482996133 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 5.203 seconds.
      Max/Min memory usage: 24964608 / 8853456 kb
      0 / 2000 failed to complete.
      Transaction rate: 384.3936190659235 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <transactions>
      --------------------
      Time to execute 2000 transactions: 5.953 seconds.
      Max/Min memory usage: 34091528 / 13008080 kb
      0 / 2000 failed to complete.
      Transaction rate: 335.9650596337981 txn/sec.

      //-----------------------------------------------

      C:\java\servers\cloudscape\10.0\frameworks\embedded\bin>set CLOUDSCAPE_INSTALL=c:\java\servers\cloudscape\10.0

      C:\java\servers\cloudscape\10.0\frameworks\embedded\bin>set CP=c:\java\servers\cloudscape\10.0\lib\derby.jar;c:\java\servers\cloudscape\10.0\lib\derbytools.jar;
      c:\java\servers\cloudscape\10.0\lib\hsqldbtest.jar

      C:\java\servers\cloudscape\10.0\frameworks\embedded\bin>java -classpath c:\java\servers\cloudscape\10.0\lib\derby.jar;c:\java\servers\cloudscape\10.0\lib\derbyt
      ools.jar;c:\java\servers\cloudscape\10.0\lib\hsqldbtest.jar org.hsqldb.test.JDBCBench -driver org.hsqldb.jdbcDriver -url jdbc:hsqldb:testDb -user sa -clients 20
      0 -tpc 10 -init
      *********************************************************
      * JDBCBench v1.1                                        *
      *********************************************************

      Driver: org.hsqldb.jdbcDriver
      URL:jdbc:hsqldb:testDb

      Scale factor value: 1
      Number of clients: 200
      Number of transactions per client: 10

      Start: Wed Nov 24 15:27:53 CST 2004
      Initializing dataset...DBMS: HSQL Database Engine
      In transaction mode
      Already initialized
      done.

      Complete: Wed Nov 24 15:27:54 CST 2004
      * Starting Benchmark Run *

      * Benchmark Report *
      * Featuring <direct queries> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 2.187 seconds.
      Max/Min memory usage: 11225488 / 7979792 kb
      0 / 2000 failed to complete.
      Transaction rate: 914.4947416552355 txn/sec.

      * Benchmark Report *
      * Featuring <direct queries> <transactions>
      --------------------
      Time to execute 2000 transactions: 1.984 seconds.
      Max/Min memory usage: 10475376 / 7939912 kb
      0 / 2000 failed to complete.
      Transaction rate: 1008.0645161290323 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 0.859 seconds.
      Max/Min memory usage: 10642896 / 8607624 kb
      0 / 2000 failed to complete.
      Transaction rate: 2328.288707799767 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <transactions>
      --------------------
      Time to execute 2000 transactions: 0.938 seconds.
      Max/Min memory usage: 11896760 / 9081064 kb
      0 / 2000 failed to complete.
      Transaction rate: 2132.1961620469083 txn/sec.

       
    • Mathew
      2004-11-27

      Recently I came across a free Java database named One$DB. It is the free version of the commercial product Daffodil DB (www.daffodildb.com).

      It is free for commercial usage too. I think it is a product worth including in your comparison.

       
    • Hey Mathew. Fair enough.

      I don't have access to a fast machine tonight (my twin 2 GHz home workstations are in the shop, I'm defragging my wife's machine, and I don't have VPN configured on the kids' Win98 box to run remote desktop to my 2.8 GHz machine at work), sooooo.....

      Microsoft Windows 98 4.10.2222 A
      Clean install using Full OEM CD /T:C:\WININST0.400 /SrcDir=D:\WIN98 /IZ /IS /IQ /IT /II /NR /II /C  /U:xxxxxxxxxxxxxxxxx
      IE 5 6.0.2800.1106
      Uptime: 0:05:38:07
      Normal mode
      On "C3A7K8" as "campbell"

      AuthenticAMD AMD-K6tm w/ multimedia extensions
      256MB RAM
      78% system resources free
      Custom swap file on drive C (1284MB free)
      Available space on drive C: 1284MB of 3585MB (FAT32)
      Available space on drive D: 53MB of 391MB (FAT)
      Available space on drive E: 311MB of 1503MB (FAT32)

      SYSTEM.INI

      // tweaked a bit until empirically derived best overall system performance achieved

      [386Enh]
      device=TDIMSYS.VXD
      device=dva.386
      ebios=*ebios
      woafont=dosapp.fon
      mouse=*vmouse, msmouse.vxd
      device=*dynapage
      device=*vcd
      device=*vpd
      device=*int13
      device=*enable
      keyboard=*vkd
      display=*vdd,*vflatd
      PagingDrive=C:
      MinPagingFileSize=262144
      MaxPagingFileSize=262144
      DMABufferSize=64
      32BitDiskAccess=on
      32BitFileAccess=on
      MaxPhysPage=40000
      MinSPs=16
      PageBuffers=32

      [vcache]
      ChunkSize=512
      MinFileCache=1024
      MaxFileCache=65536

      Here's the output:

      *********************************************************
      * JDBCBench v1.1                                        *
      *********************************************************

      Driver: in.co.daffodil.db.jdbc.DaffodilDBDriver
      URL:jdbc:daffodilDB_embedded:testDb;path=.\onedb

      Scale factor value: 1
      Number of clients: 2
      Number of transactions per client: 1000

      Start: Sat Nov 27 22:33:53 CST 2004
      Initializing dataset...DBMS: DaffodilDB
      In transaction mode
      Already initialized
      done.

      Complete: Sat Nov 27 22:34:09 CST 2004
      * Starting Benchmark Run *

      * Benchmark Report *
      * Featuring <direct queries> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 242.77 seconds.
      Max/Min memory usage: 16344112 / 7462120 kb
      0 / 2000 failed to complete.
      Transaction rate: 8.238250195658441 txn/sec.

      * Benchmark Report *
      * Featuring <direct queries> <transactions>
      --------------------
      Time to execute 2000 transactions: 223.6 seconds.
      Max/Min memory usage: 17612880 / 8300496 kb
      0 / 2000 failed to complete.
      Transaction rate: 8.94454382826476 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 71.02 seconds.
      Max/Min memory usage: 23405312 / 8455808 kb
      0 / 2000 failed to complete.
      Transaction rate: 28.161081385525204 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <transactions>
      --------------------
      Time to execute 2000 transactions: 63.94 seconds.
      Max/Min memory usage: 31337064 / 13915728 kb
      0 / 2000 failed to complete.
      Transaction rate: 31.279324366593684 txn/sec.

      //------------------------------------------------

      *********************************************************
      * JDBCBench v1.1                                        *
      *********************************************************

      Driver: org.hsqldb.jdbcDriver
      URL:jdbc:hsqldb:file:hsql\testDb

      Scale factor value: 1
      Number of clients: 2
      Number of transactions per client: 1000

      Start: Sat Nov 27 22:30:44 CST 2004
      Initializing dataset...DBMS: HSQL Database Engine
      In transaction mode
      Already initialized
      done.

      Complete: Sat Nov 27 22:30:56 CST 2004
      * Starting Benchmark Run *

      * Benchmark Report *
      * Featuring <direct queries> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 17.03 seconds.
      Max/Min memory usage: 10453824 / 7833472 kb
      0 / 2000 failed to complete.
      Transaction rate: 117.43981209630064 txn/sec.

      * Benchmark Report *
      * Featuring <direct queries> <transactions>
      --------------------
      Time to execute 2000 transactions: 10.99 seconds.
      Max/Min memory usage: 10000608 / 7846352 kb
      0 / 2000 failed to complete.
      Transaction rate: 181.98362147406732 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 4.72 seconds.
      Max/Min memory usage: 10425032 / 8457560 kb
      0 / 2000 failed to complete.
      Transaction rate: 423.7288135593221 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <transactions>
      --------------------
      Time to execute 2000 transactions: 5.27 seconds.
      Max/Min memory usage: 11061376 / 8917808 kb
      0 / 2000 failed to complete.
      Transaction rate: 379.50664136622396 txn/sec.

       
    • KSunitha
      2005-01-17

      Hello,

      Can you please point me to where I can get the JDBCBench benchmark (by Mark Matthews)? I tried Mark Matthews' old website but couldn't get to the source.

      Thanks,
      Sunitha.

       
      • Fred Toussi
        2005-01-17

        JDBCBench source is in the org.hsqldb.test package and is distributed with HSQLDB.

         
    • terryn
      2005-01-31

      Reviewing your benchmark testing, I noticed that the comparison against Derby showed HSQLDB winning hands down, while in the later comparison against DaffodilDB the HSQLDB numbers looked like the earlier Derby numbers.

      What caused the significant drop in performance of the HSQLDB database?

       
      • Fred Toussi
        2005-01-31

        It says at the top of the tests that the first set was done on a 2.8 GHz machine, while the second set was done on an old AMD K6 machine. That obviously accounts for the difference in absolute speed.

         
    • KSunitha
      2005-02-02

      Hello,

      After looking closely at the benchmark, I realized that the test was not generating the correct input values for the transactions to happen correctly. Thus it is not measuring the actual work.

      The branch IDs are not generated properly: invalid values are produced, so the transactions are actually *not doing* what they are supposed to do, i.e. the updates perform no actual update because the 'WHERE bid = ?' clause is never satisfied.

      The problem lies in this function:
          public static int getRandomID(int type)
          {
              int min, max, num;

              max = min = 0;
              num = naccounts;
             
              switch(type) {
              case TELLER:
                      min += nbranches;
                      num = ntellers;
                      /* FALLTHROUGH */
              case BRANCH:
                      if (type == BRANCH)
                              num = nbranches;
                      min += naccounts;
                      /* FALLTHROUGH */
              case ACCOUNT:
                      max = min + num - 1;
              }
              return (getRandomInt(min, max));
          }

      The fallthrough is incorrect. The branch IDs generated run into the 100000s, depending on naccounts, whereas the test has only one branch. As this benchmark mimics the TPC-B benchmark, I think it should make sure that the transactions do the work they are supposed to do. AFAIK, TPC-B measures the tps for valid data.

      Thus it does not seem that JDBCBench is currently measuring what it needs to measure.

      I made the changes below to the getRandomID method to generate valid values for the test.
      ---------------------------------
          public static int getRandomID(int type) {

              int min = 0;
              int num = naccounts;

              switch (type) {

                  case TELLER :
                      num = ntellers;
                      break;

                  case BRANCH :
                      num = nbranches;
                      break;

                  case ACCOUNT :
                      num = naccounts;
                      break;
              }

              int max = min + num - 1;

              return getRandomInt(min, max);
          }
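A quick standalone check of the corrected logic (illustrative constants and a fixed seed, not the benchmark's real configuration) shows that branch IDs now stay in range:

```java
import java.util.Random;

public class RandomIdCheck {

    static final int TELLER = 0, BRANCH = 1, ACCOUNT = 2;

    // Illustrative scale-1-style sizes, not JDBCBench's actual fields.
    static final int nbranches = 1, ntellers = 10, naccounts = 100000;
    static final Random rnd = new Random(42);

    static int getRandomInt(int lo, int hi) {
        return lo + rnd.nextInt(hi - lo + 1);
    }

    // Same shape as the corrected getRandomID: min stays 0, only the
    // range size depends on the entity type.
    static int getRandomID(int type) {
        int num = naccounts;
        switch (type) {
            case TELLER  : num = ntellers;  break;
            case BRANCH  : num = nbranches; break;
            case ACCOUNT : num = naccounts; break;
        }
        return getRandomInt(0, num - 1);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            int bid = getRandomID(BRANCH);
            if (bid < 0 || bid >= nbranches) {
                throw new AssertionError("branch id out of range: " + bid);
            }
        }
        System.out.println("all branch ids in range");
    }
}
```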

      Sunitha.

       
      • Fred Toussi
        2005-02-02

        Thanks for the correction. It would of course be better if the benchmark corresponded more closely to a real-life scenario. I will re-check and include the correction in the next release.

         
    • KSunitha
      2005-02-02

      I believe it is not fair to compare Derby and HSQLDB because of fundamental differences in their behavior. The numbers above are not indicative of the performance differences; it is *not* an apples-to-apples comparison. The test is not doing the same work in HSQLDB and Derby. A quick example: in HSQLDB the test runs at read uncommitted, whereas in Derby it runs at read committed.

      I researched Derby and HSQLDB, and here are some important differences in the functionality they provide, which in turn contribute to how they perform.

      1) Transactions:
      HSQLDB does *not* support transaction isolation. All transactions run as read uncommitted (dirty-read mode). Transactions read dirty (uncommitted) data, which is not what you want for update transactions, say a banking application like TPC-B. - http://hsqldb.sourceforge.net/doc/guide/ch02.html#N104D5

      Derby supports *all* transaction isolation levels. Its default transaction isolation is READ_COMMITTED. Derby guarantees that if you run in read-committed mode, you will not read dirty data.
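For reference, this is how a JDBC client requests an isolation level (connection setup omitted, so the method is only a sketch); whether the request is honoured depends on the engine, and HSQLDB at the time ran everything as read uncommitted regardless:

```java
import java.sql.Connection;
import java.sql.SQLException;

public class IsolationDemo {

    // Sketch only: assumes the caller has already opened a Connection.
    static void requestReadCommitted(Connection conn) throws SQLException {
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
    }

    public static void main(String[] args) {
        // The standard JDBC isolation constants, no database required:
        System.out.println("READ_UNCOMMITTED="
                + Connection.TRANSACTION_READ_UNCOMMITTED
                + " READ_COMMITTED="
                + Connection.TRANSACTION_READ_COMMITTED);
    }
}
```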

      2) In-memory database:
      HSQLDB by default creates tables in memory, so if you use CREATE TABLE the table is in memory. This means that with large amounts of data there is high memory utilization; the application may be limited by the amount of memory available and may perform slowly if a table does not fit in memory.

      http://www.jboss.org/wiki/Wiki.jsp?page=ConfigJBossMQDB talks about out-of-memory errors as a result.
      http://www.devx.com/IBMCloudscape/Article/21773 talks about how this could lead to scalability issues.

      Derby is disk-based. Derby uses a page cache to keep recently used pages in memory and writes data to disk. Thus memory consumption is stable, and Derby can be used with large amounts of data.

      This difference is important to note, as the speed in these two cases is different. This seems to be the reason why it is not ideal to compare speed differences here.

      3) Reliability:

      Derby guarantees that if your system crashes in any way, committed transactions will remain committed. This requires that when a transaction commits, logs are synced to the disk. Syncing to the disk takes time.

      On the other hand, it seems that HSQLDB is not failsafe, as the log file is not flushed (synced) to the disk after a commit. By default the file sync happens every 60 seconds. That means you could lose transactions committed in between.

      http://nagoya.apache.org/eyebrowse/ReadMsg?listName=derby-user@db.apache.org&msgNo=11
      http://nagoya.apache.org/eyebrowse/ReadMsg?listName=derby-user@db.apache.org&msgNo=13 gives a little more detail on the log flusher thread for hsqldb.

      Thus it seems necessary to consider these differences before comparing raw numbers.

      Some other links that also discuss the differences between HSQLDB and Derby:
      http://forums.atlassian.com/thread.jspa?threadID=6153&messageID=248904679
      http://developers.slashdot.org/comments.pl?sid=127289&threshold=1&commentsort=0&tid=221&tid=198&tid=136&tid=108&tid=8&tid=2&mode=thread&pid=10640524#10642178
      http://www.luisdelarosa.com/blog/2004/10/whatever_happen.html
      post by stephane TRAUMAT
      http://www.jboss.org/wiki/Wiki.jsp?page=ConfigJBossMQDB

       
      • Fred Toussi
        2005-02-02

        First of all, please introduce yourself and state why you are taking the time to compile this information and post it in different places (web searches show that you have already posted the same material elsewhere). The SourceForge database shows you registered only two weeks ago, and your previous question shows you are not familiar with HSQLDB.

        You are comparing Derby and HSQLDB yourself, aren't you? But you say: "I believe that is not fair to compare Derby and Hsqldb because of fundamental differences in their behavior."

        You probably mean that it is not fair to compare the relative speed of the two products. At the same time, you seem to think it is OK to compare other aspects, using whatever verified or unverified information you can link to.

        Users need comparisons to help them decide on their own deployment. Speed is one of the areas that concerns users most. Comparison with Derby is fair as it has been touted as a replacement for HSQLDB for embedding into applications.

        As for the specifics that you mention:

        1. Many applications do not need transaction isolation. This is even more the case for a database embedded in an application. (Do you run a bank on an application with an embedded database?) Real database application developers are much better informed and more intelligent than you seem to think they are, and will use a product on the basis of the needs of their applications.

        2. Again, app developers will read the manual and use CREATE CACHED TABLE for large tables -- and importantly, set the cache_scale property according to their JVM memory allocation and the usage pattern of their app. The JBoss link you mention there shows that they did not do any of this, because they included the DB for test purposes only (and it refers to versions of HSQLDB and Hypersonic dating back years -- the new version 1.8.0 has been stress tested in a production JBoss instance). The other link is to some blatant marketing article, not a real usage scenario.

        3. Derby does not guarantee that committed transactions remain valid if your system crashes. I don't think the developers ever said that, or, more importantly, offer to pay any compensation if your system crashes and committed transactions are lost.

        Basically, Derby opens a random access file in a mode that is vaguely stated in the Javadoc published by Sun to save the changes to the disk as they occur. This is by no means a GUARANTEE. HSQLDB, on the other hand, has been designed based on the recognition that syncing to disk is just one aspect of a complex set of issues, and performs the sync at user-defined intervals. Again, the default may be sensible for the average deployment scenario, but the user has the choice.
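        The two strategies described above can be sketched in plain java.io terms. This is an illustrative toy, not either engine's code: "rwd" requests an OS-level flush on every write (the Derby-style mode), while plain "rw" buffers writes and leaves the application to decide when to call sync() (the HSQLDB-style interval policy).

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch of the two durability strategies discussed above.
// "rwd" asks the OS to flush file content on every write (Derby-style);
// plain "rw" buffers writes, so the application must sync explicitly
// at moments of its own choosing (HSQLDB-style WRITE DELAY).
public class SyncModes {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("log", ".bin");
        f.deleteOnExit();

        // Per-write sync: every write() call is pushed toward the device.
        try (RandomAccessFile perWrite = new RandomAccessFile(f, "rwd")) {
            perWrite.write(new byte[]{1, 2, 3});
        }

        // Interval sync: writes are buffered; the engine decides when
        // to force them to disk (here, once, via the file descriptor).
        try (RandomAccessFile buffered = new RandomAccessFile(f, "rw")) {
            buffered.seek(buffered.length());
            buffered.write(new byte[]{4, 5, 6});
            buffered.getFD().sync();   // the user-chosen sync point
        }

        System.out.println(f.length()); // 6 bytes written in total
    }
}
```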

         
      • I agree that your point 1.) about read_committed isolation and file sync issues is valid.

        I'll rerun the test with Derby in READ_UNCOMMITTED mode to match HSQLDB, since it is not yet possible to run HSQLDB in READ_COMMITTED mode (although there is ongoing work to this end).

        Regarding point 2.), you should look more closely at the benchmark test code:

                            if (DriverName.equals("org.hsqldb.jdbcDriver")) {
                                tableExtension  = "CREATE CACHED TABLE ";
                                ShutdownCommand = "SHUTDOWN";
                            }

        That is, the benchmark uses HSQLDB CACHED tables, which are file based and, like Derby's tables, use a buffer manager to keep recently used data in memory and write data to disk as required, up to a memory use ceiling.

        Regarding point 3.), it is certainly possible to lower the log sync interval, and I will rerun the test as indicated above, with the HSQLDB log sync interval lowered to the minimum (1 second, I believe).

        The really important thing, however (yes: losing committed data is to be avoided), is to ensure that the database cannot enter an inconsistent state -- i.e. that uncommitted changes never become committed in the case of an abend -- and HSQLDB ensures this.

        So really, the difference you state, in most real-world scenarios, will simply look, logically, like a "system disaster" occurred at most one minute before the actual physical occurrence.

        With any DBMS, it's possible to lose uncommitted work in the case of an abend, and even committed work in the case of a "system disaster". That's why most production systems make at least weekly full database backups and nightly "diff" backups. That is, it is commonly acknowledged that a DBMS can cover only a subset of the persistence guarantees required by most applications. A wider and more thorough policy must be implemented for enterprise (or even personal use) scenarios, for "n nines" quality guarantees.

        Campbell.

         
    • Hi all.

      Well, I finally got around to doing a (fairer) comparison between Derby and HSQLDB.

      As promised, I altered the JDBCBench TPC-B test to set Derby in READ_UNCOMMITTED mode (although I did not yet have the time to alter the TPC-B transaction detail generation to ensure any particular ratio of conflicting reads/writes or anything like that).

      Also as promised, I ran the following tests with various settings of WRITEDELAY for HSQLDB. With WRITEDELAY FALSE, the log is actually synced to disk upon each commit (each statement, in autocommit). With WRITEDELAY 1, it is synced at one-second intervals, and so on.
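      As a rough sketch of what the WRITEDELAY trade-off looks like in code (the class and method names below are invented for illustration; this is not HSQLDB's actual implementation): commits append to a buffered log, and a background task forces the bytes to disk every N seconds, so at most N seconds of committed work is at risk.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative WRITEDELAY-style logger: commits are buffered, and a
// background task syncs the log to disk at a fixed interval.
public class DelayedSyncLog implements AutoCloseable {
    private final RandomAccessFile log;
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    public DelayedSyncLog(File file, long delaySeconds) throws IOException {
        log = new RandomAccessFile(file, "rw");     // buffered writes
        timer.scheduleAtFixedRate(this::sync,
                delaySeconds, delaySeconds, TimeUnit.SECONDS);
    }

    public synchronized void commit(byte[] record) throws IOException {
        log.write(record);          // fast: no forced disk sync here
    }

    private synchronized void sync() {
        try {
            log.getFD().sync();     // at most delaySeconds of work at risk
        } catch (IOException ignored) {
        }
    }

    @Override
    public synchronized void close() throws IOException {
        timer.shutdownNow();
        sync();                     // final sync on clean shutdown
        log.close();
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("txlog", ".bin");
        f.deleteOnExit();
        try (DelayedSyncLog l = new DelayedSyncLog(f, 1)) {
            l.commit("tx1".getBytes());
            l.commit("tx2".getBytes());
        }
        System.out.println(f.length()); // 6 bytes logged
    }
}
```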

      As Fred mentions above, "Derby opens a random access file in a mode that is vaguely stated in the Javadoc published by Sun to save the changes to the disk as they occur"

      And it was quite obvious to the ear (crunch, crunch) that HSQLDB with WRITEDELAY FALSE is doing a full (non-optimised) sync, whereas Derby is much quieter (so I'm assuming that Fred's observation is correct: Derby simply opens the RAF with "rwd" (or "rws", perhaps)).

      This round of tests is on my Fedora Core II box at home.

      Anyway, here's Derby (10.0.2.1 release) embedded, using READ_UNCOMMITED isolation:

      *********************************************************
      * JDBCBench v1.1                                        *
      *********************************************************

      Driver: org.apache.derby.jdbc.EmbeddedDriver
      URL:jdbc:derby:sampleDB

      Scale factor value: 1
      Number of clients: 2
      Number of transactions per client: 1000
      Transaction isolation: READ_UNCOMMITTED

      Start: Sat Apr 23 04:35:36 CST 2005
      Initializing dataset...DBMS: Apache Derby
      In transaction mode
      Already initialized
      done.

      Complete: Sat Apr 23 04:35:39 CST 2005
      * Starting Benchmark Run *

      * Benchmark Report *
      * Featuring <direct queries> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 154.87 seconds.
      Max/Min memory usage: 19148848 / 8843664 kb
      0 / 2000 failed to complete.
      Transaction rate: 12.914056950991153 txn/sec.

      * Benchmark Report *
      * Featuring <direct queries> <transactions>
      --------------------
      Time to execute 2000 transactions: 138.105 seconds.
      Max/Min memory usage: 14726648 / 7627216 kb
      0 / 2000 failed to complete.
      Transaction rate: 14.48173491184244 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 13.637 seconds.
      Max/Min memory usage: 12078288 / 8290696 kb
      0 / 2000 failed to complete.
      Transaction rate: 146.65982254161472 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <transactions>
      --------------------
      Time to execute 2000 transactions: 10.616 seconds.
      Max/Min memory usage: 10695912 / 7170528 kb
      0 / 2000 failed to complete.
      Transaction rate: 188.39487565938208 txn/sec.

      Here's HSQLDB 1.8.0 (my repository copy from the dev1 CVS folder), using WRITEDELAY FALSE (and READ_UNCOMMITTED, since the new TX work is not ready to benchmark):

      *********************************************************
      * JDBCBench v1.1                                        *
      *********************************************************

      Driver: org.hsqldb.jdbcDriver
      URL:jdbc:hsqldb:file:jdbcbench/db

      Scale factor value: 1
      Number of clients: 2
      Number of transactions per client: 1000
      Transaction isolation: READ_UNCOMMITTED

      Start: Sat Apr 23 04:48:26 CST 2005
      Initializing dataset...DBMS: HSQL Database Engine
      In transaction mode
      Already initialized
      done.

      Complete: Sat Apr 23 04:48:28 CST 2005
      * Starting Benchmark Run *

      * Benchmark Report *
      * Featuring <direct queries> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 17.576 seconds.
      Max/Min memory usage: 8894344 / 6249800 kb
      0 / 2000 failed to complete.
      Transaction rate: 113.7915339098771 txn/sec.

      * Benchmark Report *
      * Featuring <direct queries> <transactions>
      --------------------
      Time to execute 2000 transactions: 9.752 seconds.
      Max/Min memory usage: 10294944 / 8040008 kb
      0 / 2000 failed to complete.
      Transaction rate: 205.08613617719442 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 12.835 seconds.
      Max/Min memory usage: 8195120 / 6017320 kb
      0 / 2000 failed to complete.
      Transaction rate: 155.82391897156214 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <transactions>
      --------------------
      Time to execute 2000 transactions: 7.051 seconds.
      Max/Min memory usage: 9695600 / 7652040 kb
      0 / 2000 failed to complete.
      Transaction rate: 283.6477095447454 txn/sec.

      Certainly, these times are much closer (and, to be fair, Derby's optimiser and bytecode generation for execution plans likely have an overhead that cannot be adequately tested unless JDBCBench is modified yet again to allow multiple full runs without shutting down the system).

      Here's HSQLDB 1.8.0 again, with WRITEDELAY 1
      (yes: I've heard the argument that one can lose quite a bit of work in one second... but really, there aren't many cases in OLTP where transactions have greater than vastly sub-second durations, and you're likely to lose far fewer if your transaction rate peaks out at 200/second... ha, ha... I mean if you sync to disk upon each commit... OK, it's a trade-off. If your app really dies badly, it's likely because something terrible happened (power outage, hardware failure, core dump, etc.), all of which are pretty unusual events in the overall scheme of things. Most enterprise setups I've worked at or heard tell of promise no more than guaranteed durability of data up to the preceding work day -- i.e. restore to the previous day from nightly diffs, or restore from the weekly full backup and then roll forward from nightly diffs up to the preceding day):

      *********************************************************
      * JDBCBench v1.1                                        *
      *********************************************************

      Driver: org.hsqldb.jdbcDriver
      URL:jdbc:hsqldb:file:jdbcbench/db

      Scale factor value: 1
      Number of clients: 2
      Number of transactions per client: 1000
      Transaction isolation: READ_UNCOMMITTED

      Start: Sat Apr 23 05:09:54 CST 2005
      Initializing dataset...DBMS: HSQL Database Engine
      In transaction mode
      Already initialized
      done.

      Complete: Sat Apr 23 05:09:57 CST 2005
      * Starting Benchmark Run *

      * Benchmark Report *
      * Featuring <direct queries> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 5.829 seconds.
      Max/Min memory usage: 8832464 / 6230480 kb
      0 / 2000 failed to complete.
      Transaction rate: 343.112026076514 txn/sec.

      * Benchmark Report *
      * Featuring <direct queries> <transactions>
      --------------------
      Time to execute 2000 transactions: 3.523 seconds.
      Max/Min memory usage: 10462480 / 8047328 kb
      0 / 2000 failed to complete.
      Transaction rate: 567.6979846721543 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 0.991 seconds.
      Max/Min memory usage: 7724152 / 6041192 kb
      0 / 2000 failed to complete.
      Transaction rate: 2018.1634712411706 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <transactions>
      --------------------
      Time to execute 2000 transactions: 1.033 seconds.
      Max/Min memory usage: 9698976 / 7689832 kb
      0 / 2000 failed to complete.
      Transaction rate: 1936.1084220716361 txn/sec.

      Here's the result with WRITEDELAY 2

      *********************************************************
      * JDBCBench v1.1                                        *
      *********************************************************

      Driver: org.hsqldb.jdbcDriver
      URL:jdbc:hsqldb:file:jdbcbench/db

      Scale factor value: 1
      Number of clients: 2
      Number of transactions per client: 1000
      Transaction isolation: READ_UNCOMMITTED

      Start: Sat Apr 23 05:11:56 CST 2005
      Initializing dataset...DBMS: HSQL Database Engine
      In transaction mode
      Already initialized
      done.

      Complete: Sat Apr 23 05:11:58 CST 2005
      * Starting Benchmark Run *

      * Benchmark Report *
      * Featuring <direct queries> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 5.791 seconds.
      Max/Min memory usage: 8877624 / 6270440 kb
      0 / 2000 failed to complete.
      Transaction rate: 345.36349507857017 txn/sec.

      * Benchmark Report *
      * Featuring <direct queries> <transactions>
      --------------------
      Time to execute 2000 transactions: 3.545 seconds.
      Max/Min memory usage: 10384264 / 8087880 kb
      0 / 2000 failed to complete.
      Transaction rate: 564.1748942172073 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 0.99 seconds.
      Max/Min memory usage: 7690608 / 6042360 kb
      0 / 2000 failed to complete.
      Transaction rate: 2020.2020202020203 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <transactions>
      --------------------
      Time to execute 2000 transactions: 1.035 seconds.
      Max/Min memory usage: 9705696 / 7702288 kb
      0 / 2000 failed to complete.
      Transaction rate: 1932.3671497584542 txn/sec.

      Here's the result with WRITEDELAY 60

      *********************************************************
      * JDBCBench v1.1                                        *
      *********************************************************

      Driver: org.hsqldb.jdbcDriver
      URL:jdbc:hsqldb:file:jdbcbench/db

      Scale factor value: 1
      Number of clients: 2
      Number of transactions per client: 1000
      Transaction isolation: READ_UNCOMMITTED

      Start: Sat Apr 23 05:13:27 CST 2005
      Initializing dataset...DBMS: HSQL Database Engine
      In transaction mode
      Already initialized
      done.

      Complete: Sat Apr 23 05:13:30 CST 2005
      * Starting Benchmark Run *

      * Benchmark Report *
      * Featuring <direct queries> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 5.741 seconds.
      Max/Min memory usage: 9147624 / 6475528 kb
      0 / 2000 failed to complete.
      Transaction rate: 348.3713638738896 txn/sec.

      * Benchmark Report *
      * Featuring <direct queries> <transactions>
      --------------------
      Time to execute 2000 transactions: 3.624 seconds.
      Max/Min memory usage: 10563032 / 8289552 kb
      0 / 2000 failed to complete.
      Transaction rate: 551.8763796909492 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 1.05 seconds.
      Max/Min memory usage: 7991168 / 6046776 kb
      0 / 2000 failed to complete.
      Transaction rate: 1904.7619047619046 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <transactions>
      --------------------
      Time to execute 2000 transactions: 1.03 seconds.
      Max/Min memory usage: 9807984 / 7730104 kb
      0 / 2000 failed to complete.
      Transaction rate: 1941.7475728155339 txn/sec.

      Finally, just to make sure there aren't any other objections regarding settings that can be made (beyond "this requires coding"), here's HSQLDB with every performance-reducing thing I can think of set to the "worst" value:

      #data lengths are checked and enforced for every row insert/update
      sql.enforce_strict_size=true

      # use "regular" rather than "nio" file access
      hsqldb.nio_data_file=false

      #reduce max number of allowed buffered rows to < 1K
      hsqldb.cache_scale=8

      # log is sync'ed for each commit:
      WRITEDELAY FALSE
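      As a side note on the cache_scale=8 setting above: assuming the formula from the 1.8.0 documentation of roughly 3 * 2^scale cached rows (that formula is my reading of the docs, so treat it as an assumption), the arithmetic works out like this:

```java
// Sanity check of the cache_scale arithmetic, assuming the documented
// cap of 3 * 2^scale cached rows for HSQLDB 1.8.0 CACHED tables.
public class CacheScale {
    static long maxCachedRows(int scale) {
        return 3L << scale;           // 3 * 2^scale
    }

    public static void main(String[] args) {
        System.out.println(maxCachedRows(8));   // the "worst case" test setting
        System.out.println(maxCachedRows(18));  // the "best case" test setting
    }
}
```

      So scale 8 caps the cache at 768 rows, while scale 18 allows some 786,000 rows.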

      And here's the result:

      *********************************************************
      * JDBCBench v1.1                                        *
      *********************************************************

      Driver: org.hsqldb.jdbcDriver
      URL:jdbc:hsqldb:file:jdbcbench/db

      Scale factor value: 1
      Number of clients: 2
      Number of transactions per client: 1000
      Transaction isolation: READ_UNCOMMITTED

      Start: Sat Apr 23 05:18:07 CST 2005
      Initializing dataset...DBMS: HSQL Database Engine
      In transaction mode
      Already initialized
      done.

      Complete: Sat Apr 23 05:18:15 CST 2005
      * Starting Benchmark Run *

      * Benchmark Report *
      * Featuring <direct queries> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 19.358 seconds.
      Max/Min memory usage: 4138616 / 1665264 kb
      0 / 2000 failed to complete.
      Transaction rate: 103.31645831180907 txn/sec.

      * Benchmark Report *
      * Featuring <direct queries> <transactions>
      --------------------
      Time to execute 2000 transactions: 12.476 seconds.
      Max/Min memory usage: 4036648 / 1961896 kb
      0 / 2000 failed to complete.
      Transaction rate: 160.30779095864057 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 16.092 seconds.
      Max/Min memory usage: 2912608 / 1952336 kb
      0 / 2000 failed to complete.
      Transaction rate: 124.28535918468805 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <transactions>
      --------------------
      Time to execute 2000 transactions: 10.759 seconds.
      Max/Min memory usage: 2838712 / 1862672 kb
      0 / 2000 failed to complete.
      Transaction rate: 185.89088205223533 txn/sec.

      So, for prepared statements using multi-statement transactions, comparing Derby in READ_UNCOMMITTED isolation (to satisfy any previous objections concerning speed differences due to enforcing better isolation) against HSQLDB with all known performance settings adjusted to their lowest-speed values, we get essentially the same result:

      Derby:  Transaction rate: 188.39487565938208 txn/sec.
      HSQLDB: Transaction rate: 185.89088205223533 txn/sec.

      I think we can safely ignore the 2.5 txn/sec difference, as I'm sure that under multiple runs it could go either way. Java is notorious for introducing noise well into the 2-5% range (due to hotspot effects, etc.) unless metrics are collected and averaged over multiple runs (allowing the JVM to warm up), and usually only if each run consists of a large enough amount of work (unpredictable thread-switching overhead and the low precision of System.currentTimeMillis add a great deal of noise to very short runs).
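      For the record, the arithmetic behind those two headline numbers (each is just 2000 transactions divided by the reported wall-clock time) also shows the gap is about 1.3% -- inside the noise band just mentioned:

```java
import java.util.Locale;

// Recompute the two headline rates from the reported wall-clock times
// and express the gap as a percentage of the Derby rate.
public class RateGap {
    public static void main(String[] args) {
        double derby  = 2000 / 10.616;   // Derby, prepared + transactions
        double hsqldb = 2000 / 10.759;   // HSQLDB, "worst case" settings
        double gapPct = 100 * (derby - hsqldb) / derby;
        System.out.printf(Locale.ROOT, "%.2f %.2f %.2f%n",
                derby, hsqldb, gapPct);
    }
}
```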

      But what have we learned here?

      Mostly, from what I can wager (based on the makeup of a TPC-B transaction, which is very short and low volume), in this last case we have tested the speed at which the OS file sync command can operate, given the drive subsystem attached to my computer.

      I did skim the Derby docs, looking for sections pertaining to control of logging characteristics. However, I did not find anything terribly promising on a first pass.

      But then Derby is open source, so I can crawl into the internals and interpose some logic to add WRITEDELAY functionality there, to allow a better comparison (without the drive subsystem becoming the dominating factor for JDBCBench).

      And maybe if I (find the time to) do that (you never know), it can become a patch and maybe even an option for Derby users (choice is good... not everyone wanting to use an embedded database wants to be forced into <200 txn/sec rates when given the choice to trade this off against the generally minuscule risk of potentially losing a small (one second) amount of work).

      Cache priming and setting initialPages, et al., did look interesting (I'll play around with this as time allows, I guess).

      Anyway, here's a last kick at the cat, doing the opposite of the last test (this time ensuring the best speed performance for HSQLDB).

      In this test, I took the liberty of recompiling HSQLDB to open its RAF data file in "rwd" mode.

      And I reset the database properties to optimal levels.

      cache scale was set to 18 (default 12?)
      nio file access set true
      strict size set false

      Here's the result:

      *********************************************************
      * JDBCBench v1.1                                        *
      *********************************************************

      Driver: org.hsqldb.jdbcDriver
      URL:jdbc:hsqldb:file:jdbcbench/db

      Scale factor value: 1
      Number of clients: 2
      Number of transactions per client: 1000
      Transaction isolation: READ_UNCOMMITTED

      Start: Sat Apr 23 06:13:05 CST 2005
      Initializing dataset...DBMS: HSQL Database Engine
      In transaction mode
      Drop old tables if they exist
      Creates tables
      Delete elements in table in case Drop didn't work
      Using prepared statements
      Insert data in branches table
      Insert data in tellers table
      Insert data in accounts table
              10000    records inserted
              20000    records inserted
              30000    records inserted
              40000    records inserted
              50000    records inserted
              60000    records inserted
              70000    records inserted
              80000    records inserted
              90000    records inserted
              100000   records inserted
      done.

      Complete: Sat Apr 23 06:13:10 CST 2005
      * Starting Benchmark Run *

      * Benchmark Report *
      * Featuring <direct queries> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 4.707 seconds.
      Max/Min memory usage: 38172400 / 34035336 kb
      0 / 2000 failed to complete.
      Transaction rate: 424.8990864669641 txn/sec.

      * Benchmark Report *
      * Featuring <direct queries> <transactions>
      --------------------
      Time to execute 2000 transactions: 2.823 seconds.
      Max/Min memory usage: 38781688 / 34562296 kb
      0 / 2000 failed to complete.
      Transaction rate: 708.4661707403471 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 0.482 seconds.
      Max/Min memory usage: 37731744 / 35063600 kb
      0 / 2000 failed to complete.
      Transaction rate: 4149.377593360996 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <transactions>
      --------------------
      Time to execute 2000 transactions: 0.405 seconds.
      Max/Min memory usage: 38206984 / 34883048 kb
      0 / 2000 failed to complete.
      Transaction rate: 4938.271604938272 txn/sec.
      [root@localhost drivers]# sh ./jdbcbench-hsqldb-file.sh
      *********************************************************
      * JDBCBench v1.1                                        *
      *********************************************************

      Driver: org.hsqldb.jdbcDriver
      URL:jdbc:hsqldb:file:jdbcbench/db

      Scale factor value: 1
      Number of clients: 2
      Number of transactions per client: 1000
      Transaction isolation: READ_UNCOMMITTED

      Start: Sat Apr 23 06:13:45 CST 2005
      Initializing dataset...DBMS: HSQL Database Engine
      In transaction mode
      Already initialized
      done.

      Complete: Sat Apr 23 06:13:47 CST 2005
      * Starting Benchmark Run *

      * Benchmark Report *
      * Featuring <direct queries> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 5.032 seconds.
      Max/Min memory usage: 36467848 / 32491616 kb
      0 / 2000 failed to complete.
      Transaction rate: 397.456279809221 txn/sec.

      * Benchmark Report *
      * Featuring <direct queries> <transactions>
      --------------------
      Time to execute 2000 transactions: 2.8 seconds.
      Max/Min memory usage: 37065000 / 33011704 kb
      0 / 2000 failed to complete.
      Transaction rate: 714.2857142857143 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <auto-commit>
      --------------------
      Time to execute 2000 transactions: 0.517 seconds.
      Max/Min memory usage: 36720408 / 33509256 kb
      0 / 2000 failed to complete.
      Transaction rate: 3868.4719535783365 txn/sec.

      * Benchmark Report *
      * Featuring <prepared statements> <transactions>
      --------------------
      Time to execute 2000 transactions: 0.406 seconds.
      Max/Min memory usage: 36787664 / 33474368 kb
      0 / 2000 failed to complete.
      Transaction rate: 4926.108374384236 txn/sec.

      What would be quite interesting would be to offer HSQLDB users the choice to use the "rws" or "rwd" modes for the transaction log file.

      Of course, this would require using a random access file for log file output, since there is no java API hook for setting "rws" or "rwd" mode for FileOutputStream.
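      A hypothetical sketch of what that could look like: a log writer built on RandomAccessFile, where the sync mode becomes a constructor argument. All names here are invented for illustration; this is not proposed HSQLDB code.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Hypothetical sketch: a log writer built on RandomAccessFile so the
// sync mode ("rw", "rwd", or "rws") becomes a user-visible choice,
// which FileOutputStream cannot offer.
public class ModeSelectableLogWriter {
    private final RandomAccessFile log;

    public ModeSelectableLogWriter(File file, String mode) throws IOException {
        // mode = "rw"  -> buffered; sync when the engine decides (WRITEDELAY)
        // mode = "rwd" -> content synced on every write (Derby-like)
        // mode = "rws" -> content and metadata synced on every write
        log = new RandomAccessFile(file, mode);
        log.seek(log.length());                 // append to existing log
    }

    public void writeStatement(String sql) throws IOException {
        log.write((sql + "\n").getBytes("ISO-8859-1"));
    }

    public void close() throws IOException {
        log.close();
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("script", ".log");
        f.deleteOnExit();
        ModeSelectableLogWriter w = new ModeSelectableLogWriter(f, "rwd");
        w.writeStatement("COMMIT");             // 6 chars plus newline
        w.close();
        System.out.println(f.length());
    }
}
```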

      But then HSQLDB could claim that it fully supports every bit as strong a guarantee regarding log (and data file) write safety as Derby (but with far, far better speed in READ_UNCOMMITTED mode, as demonstrated above).

      Anyway, I'll promise here to put up another set of results after I have time to review and improve JDBCBench (or confirm it has been improved in the ways described in a post above), and after I have time to become familiar with how to properly tune Derby's buffer management, so as to apply semi-apples-to-apples settings to Derby relative to HSQLDB's cache scale settings.

      Bye for now,
      Campbell