Support for this was added to HyperXtremeSQL a few years ago. No plan to add to HyperSQL.
we are in 2026 and still this is not supported. Any plans on adding this support?
Reused statement invalidated
Fixed and committed to SVN (revision 6840)
core code updates - minor code reformat
core code updates - fix for bug #1740
will do.

> On Sun, Mar 29, 2026 at 8:03 PM Fred Toussi fredt@users.sourceforge.net wrote: status: open --> open-works-for-me; assigned_to: Fred Toussi; Comment: I cannot reproduce this. Please post a modified version of the working code below to cause the NPE. ...
NPE in Session.executeCompiledBatchStatement with "insert on duplicate key update" statements
I cannot reproduce this. Please post a modified version of the working code below to cause the NPE. Connection connection = newConnection(); Statement statement = connection.createStatement(); statement.execute("SET DATABASE SQL SYNTAX MYS TRUE"); statement.execute("DROP TABLE test3 IF EXISTS"); statement.execute("CREATE TABLE test3 (id INT GENERATED BY DEFAULT AS IDENTITY, data BIGINT)"); PreparedStatement preparedStatement = connection.prepareStatement("INSERT INTO test3 (id, data) VALUES (?, ?)...
Thank you, Fred! And thank you for all your work on HSQLDB. I have been a happy user for many years.
DatabaseMetaData.getTablePrivileges doesn't follow JDBC
DatabaseMetaData.getColumns() returns non-null SOURCE_DATA_TYPE for non-DISTINCT/non-REF columns
Cannot see the issue. The value of the column is null when the DATA_TYPE is not distinct.
JDBCSQLXML fails to compile with JDK 26+
Fix committed to SVN.
core code updates - fix for bug #1741
Strange characters and blank lines with text tables in a transaction
JavaSystem should not throw NumberFormatException during normal operation
Symmetrical join order affects results
Thanks. Fixed and committed to SVN.
core code updates - fix for bug #1744
TRUNC has different results in DST and standard time
Not a bug. Looks like the same feature as #1747.
DATEADD does not take DST into account
This is not a bug. HSQLDB does not store the time zone as 'Europe/Paris'. It is immediately converted to the time offset of +01:00 hour. When you add one month to the timestamp, it keeps the original DST.
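The behaviour described above can be seen with plain java.time, no HSQLDB involved: once a zoned value is reduced to a fixed offset, month arithmetic keeps that offset, whereas a zone-aware value picks up the DST change. A sketch (the dates are made up for illustration):

```java
import java.time.OffsetDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DstOffsetDemo {
    public static void main(String[] args) {
        // A pre-DST instant in Europe/Paris carries the standard-time offset +01:00.
        ZonedDateTime paris =
                ZonedDateTime.of(2026, 3, 15, 12, 0, 0, 0, ZoneId.of("Europe/Paris"));

        // Analogous to what HSQLDB stores: the offset only, not the zone id.
        OffsetDateTime stored = paris.toOffsetDateTime();
        System.out.println(stored.getOffset());               // +01:00

        // Adding one month keeps the stored offset, even though Europe/Paris
        // has switched to +02:00 by mid-April.
        System.out.println(stored.plusMonths(1).getOffset()); // still +01:00

        // A zone-aware value, by contrast, follows the DST transition.
        System.out.println(paris.plusMonths(1).getOffset());  // +02:00
    }
}
```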
General error in LOCATE when offset is set to NULL
Thanks for reporting. Fixed and committed to SVN.
core code updates - fix for bug #1745
DATEADD does not take DST into account
TRUNC has different results in DST and standard time
General error in LOCATE when offset is set to NULL
core code updates - system property textdb.allow_full_path now defaults to false
core code updates - minor enhancements
Thanks, email checked and replied to.
Hi Fred, would you mind checking the email from "slee3846@gatech.edu"? I'm also on the CC list (czhang887@gatech.edu). We compiled our recent security findings there. Thanks!
Sure, here is the SQL code that results from that test: CREATE TABLE A1 (id VARCHAR(255)); CREATE TABLE A2 (id VARCHAR(255)); CREATE TABLE A3 (id VARCHAR(255)); CREATE TABLE A4 (id VARCHAR(255)); INSERT INTO A1 (id) VALUES ('1'); INSERT INTO A2 (id) VALUES ('1'); INSERT INTO A3 (id) VALUES ('1'); INSERT INTO A4 (id) VALUES ('1'); -- Should return 1, fails by returning 0 SELECT COUNT(*) FROM A1 INNER JOIN A2 ON A1.id = A2.id WHERE EXISTS ( SELECT 1 FROM A3 INNER JOIN A4 ON A4.id = A3.id AND A4.id...
Please write a simple SQL query showing the issue. Your test is dependent on templates and needs to be dissected.
Symmetrical join order affects results
DatabaseMetaData.getColumns() returns non-null SOURCE_DATA_TYPE for non-DISTINCT/non-REF columns
The next point release, date not yet determined.
Hi @fredt, hope you don't mind me checking in; just wondering if/when you might release a fix for this. I understand it's an open source project you're authoring and maintaining, so I'm not trying to be rude or rush you. Thanks in advance!
Please test without connection pool. I will check this before the next release and if there is a regression it will be fixed.
thanks for your quick inputs. Files don't exist. Could there be issues with e.g. connection pools?
Most properties are applied to a new database at the time of creation. Make sure the database files do not exist.
Just tested with 128000 that doesn't work either, the .script file still contains SET FILES CACHE SIZE 10000. Maybe that setting isn't supported in JDBC connections?
so that property will really only be considered with certain values? Couldn't the code round the value, so that all values are accepted?
The cache size prop must be in kilobytes, in this case 128000.
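To restate the thread's conclusion: these are database properties, applied only when the database files are first created, and hsqldb.cache_size is a value in KB. A sketch of a connection URL for a fresh database (the file path is hypothetical):

```
jdbc:hsqldb:file:/data/mydb;hsqldb.cache_size=128000;hsqldb.cache_rows=200000
```

If the .script and .data files already exist, the properties in the URL are ignored and the stored SET FILES values remain in effect.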
No effect setting hsqldb.cache_size & hsqldb.cache_rows in JDBC connection
JDBCSQLXML fails to compile with JDK 26+
Thanks for reporting. I agree with your suggestion. This will be applied to SVN.
I know this is a very old thread, but it was helpful to me when setting up a TLS secure HSQLDB server. I wanted to use an existing certificate issued by Let's Encrypt and went back to the HSQLDB documentation written by Blaine Simpson, but it references DERImport.java or DERImport.class which I cannot find, nor would I have the knowledge or skill to use if I found them. I figured that a combination of openssl and keytool must be able to achieve what I want and after much trial and effort the following...
Hi HSQLDB developers, I'm using sqltool to export csv files using the /xq command. It works great. Today, I tried the ALL_QUOTED option and I noticed two things about the output: (1) numeric values are quoted (this makes sense for an option called ALL_QUOTED); (2) the null value (whether empty string or the value of NULL_REP_TOKEN) is not quoted. Perhaps the null value should be quoted as well, when the ALL_QUOTED option is in effect. The reason I tried the ALL_QUOTED option today is that I was trying to reproduce...
JDBCSQLXML fails to compile with JDK 26+
Thanks!
Thanks for your interest. HSQLDB is supported by OSS-Fuzz. You are welcome to run assessments and report via the OSS-Fuzz bug report system, and to coordinate disclosure with us via email.
Hi HyperSQL developers, We (LeeSinLiang, Cen Zhang, and many of our team members) are Team Atlanta from the Georgia Institute of Technology, winners of DARPA's AI Cyber Challenge (AIxCC). We're reaching out to propose a security assessment collaboration with your project. This effort is recommended by DARPA's initiative to apply competition technologies to real-world open source projects. Background: We have built an AI-enhanced CRS (Cyber Reasoning System) for automatic vulnerability detection and...
Thanks for reporting. I think this occurs only with ON DUPLICATE KEY UPDATE. It will be fixed in the next point release. I will post here when it is committed, and you can then compile the jar and test with your app.
I'd like to clarify this ticket. Firstly the milestone should be for the next 2.7.x point release - NOT 2.5.x . We only want this fixed in the latest version of HSQLDB. @fredt would you mind taking a look? What we are asking for is a Null check in a code flow that throws an NPE - no other logic change is needed. If you look at the latest version (2.7.4) of Session.java the method Session.executeCompiledBatchStatement can blow up on line 1609 as it assumes Result.getChainedResult() can not return...
Just to clarify: the bug was introduced in 2.5.1 but still exists in 2.7.4 <-- this bug has prevented us from upgrading. We are only looking for a fix to the latest hsqldb code base. We do not expect a back port to older versions. We're unfamiliar with the hsql bug review process. @fredt would you be able to guide us as to next steps, as we would love to see this fixed so we can upgrade to the latest version.
Warning with JAVA 24 / 25 - Call to deprecated method
Thanks for reporting. It will be fixed for the next release.
Warning with JAVA 24 / 25 - Call to deprecated method
Thanks Fred, setting MVCC works well for me here. Each thread uses its own connection, runs for about ten minutes and commits at the end to allow restartability, so this was an easy change to make. Thanks again for your help.
Thank you. I will check this later.
It all depends on multiple factors, including the transaction model, how you use database connections and when you commit. Each thread must always use a separate connection to write to the database. In the default LOCKS mode, if multiple threads write to the same table, then the first thread's connection needs to be committed before any other thread can write. In the MVCC mode multiple connections can generally write without committing so long as there is no conflict.
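For anyone landing here, the MVCC mode described above can be selected with a single statement (this is a sketch of the standard way to do it; the same can be achieved with the hsqldb.tx=mvcc connection property on a new database):

```sql
-- Run once, from SqlTool or any JDBC Statement, by a DBA user:
SET DATABASE TRANSACTION CONTROL MVCC;
```

The setting is persisted with the database, so it only needs to be issued once.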
I have a long-running multi-threaded program which writes to the database every minute or so. What I've noticed is that after a thread writes to the database, that thread appears in VisualVM as parked, meaning that after a few minutes the program runs single-threaded. I have circumvented this by caching the information intended for the database in a List and writing it at thread completion - no big deal, but is this how the product is supposed to work? Is there a setting which I can tune to avoid i...
It's happening with 2.7.4 and every version since 2.5.2
Reused statement invalidated
Thanks for reporting. Please note there have been a lot of changes since version 2.5.2. Please check with version 2.7.4 (either Java 8 or Java 11 jar).
Reused statement invalidated
NPE in Session.executeCompiledBatchStatement with "insert on duplicate key update" statements
Thanks for reporting. Will check it later.
Fred: I recently upgraded my Java to the 64-Bit Server VM (build 25+37-LTS-3491, mixed mode, sharing), dated 2025-09-16 LTS from Oracle, in preparation to test/use HSQL 2.7.5 when available. I noticed, when using DatabaseManagerSwing (which I often use, thanks), a difference with HSQL 2.7.4 (at least with Windows 11) when choosing to display a SELECT statement's output, between Java 17 vs Java 25, when I elect to View -> Show results in Grid. With...
Hi Fred, System Logger makes it very easy to retrieve logs without having to resort to third-party libraries. It's true that this is after Java 8, but I see that a Java 11 version is available in the hsqldb driver pack. Using one interface and two different implementations makes it very easy to support logging before and after Java 9.
Regression in 2.7.4 with respect to auto-generated keys in prepared statements
Thanks for reporting. This issue has already been fixed and committed to SVN for the next release, which will happen after Java 25.
Looks like the Vert.x JDBC client is not even required for reproducing the issue try (Connection connection = DriverManager.getConnection("jdbc:hsqldb:mem:.")) { try (Statement statement = connection.createStatement()) { statement.execute("CREATE TABLE test2 (id INT, data BIGINT)"); try (PreparedStatement preparedStatement = connection.prepareStatement("INSERT INTO test2 (id, data) VALUES (?, ?)", Statement.RETURN_GENERATED_KEYS)) { preparedStatement.setInt(1, 1); preparedStatement.setLong(2, 42L);...
Regression in 2.7.4 with respect to auto-generated keys in prepared statements
For future reference, it's good to know where there are no checks. Still, literals such as DATE 'BC 0001-01-01 00:00:00' are not allowed
One can produce BC timestamps with to_timestamp('BC 0001-01-01 00:00:00', 'BC YYYY-MM-DD HH:MI:SS').
Fix committed to SVN.
core code updates - fix for bug #1736 eliminate caught exception (Julian Hyde)
Can't retrieve temporals in BC era
Hi Christian, For dates (and timestamps) I followed the SQL Standard which allows 0001 to 9999 for the year and does not allow BCE values. This is enforced in date-time literals which do not allow BCE values. When using assignments via Java PreparedStatement etc. and arithmetics involving INTERVAL there is no range checking.
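Since the literal range is 0001 to 9999 and BCE values only enter through PreparedStatement assignments or INTERVAL arithmetic, it is worth knowing how java.time numbers those years on the client side: in the proleptic ISO calendar, year 0 is 1 BCE, year -1 is 2 BCE, and so on. A small illustration (not HSQLDB-specific):

```java
import java.time.LocalDate;
import java.time.temporal.ChronoField;

public class BceYearDemo {
    public static void main(String[] args) {
        // Proleptic year 0 maps to the first year of the BCE era.
        LocalDate oneBce = LocalDate.of(0, 1, 1);
        System.out.println(oneBce.getEra());                      // BCE
        System.out.println(oneBce.get(ChronoField.YEAR_OF_ERA));  // 1
    }
}
```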
Yes! Yes! Thank you!
You probably missed this: http://www.hsqldb.org/doc/2.0/guide/sqlroutines-chapt.html#src_jrt_access_control
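The linked guide section covers, among other things, the hsqldb.method_class_names system property, which must list (or wildcard) the Java classes whose static methods may be called as SQL/JRT routines. A minimal sketch, reusing the class name from the report below; the property has to be set in the JVM before the database is opened:

```java
public class AllowJavaRoutines {
    public static void main(String[] args) {
        // Whitelist the class whose static methods back the SQL functions.
        // A trailing .* form can be used to allow a whole package.
        System.setProperty("hsqldb.method_class_names",
                "org.jumpmind.symmetric.db.hsqldb.HsqlDbFunctions");
        System.out.println(System.getProperty("hsqldb.method_class_names"));
    }
}
```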
I have a spring boot application which includes the symmetric-client.jar, but creating the required tables fails on the creation of a function. I've reduced the problem to this: package nl.softworks.calendarAggregator; import org.hsqldb.persist.HsqlProperties; public class Application { public static void main(String[] args) { HsqlProperties hsqlProperties = new HsqlProperties(); hsqlProperties.setProperty("server.port", 9147); hsqlProperties.setProperty("hsqldb.tx", "mvcc"); // multi version concurrency...
I have a spring boot application which includes the symmetric-client.jar, but creating the required tables fails on the creation of a function. I've reduced the problem to this: package nl.softworks.calendarAggregator; import org.hsqldb.persist.HsqlProperties; public class Application { public static void main(String[] args) { System.out.println("HsqlDbFunctions.encodeBase64: " + org.jumpmind.symmetric.db.hsqldb.HsqlDbFunctions.encodeBase64("test".getBytes())); HsqlProperties hsqlProperties = new...
Can't retrieve temporals in BC era
You asked: What about the OFFSET clause? The PK index is used but there is no quick way to skip to the first row. The reason being there may be gaps in the PK sequence. The rows are retrieved from the PK index and 80000 rows are discarded.
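One common workaround for the OFFSET cost described above (a general pattern, not something prescribed in this thread) is keyset pagination: remember the last id of the previous page and seek to it via the PK index instead of discarding rows. A sketch, with a made-up id value:

```sql
-- Instead of walking and discarding 80000 rows:
--   SELECT id FROM news ORDER BY id LIMIT 25 OFFSET 80000;
-- seek directly past the last id seen on the previous page:
SELECT id FROM news WHERE id > 81234 ORDER BY id LIMIT 25;
```

This stays fast regardless of page depth, at the cost of only supporting next-page style navigation.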
PS: What about the OFFSET clause? Can it not use the PK Index to skip to the first row? This takes 1.2secs: SELECT id FROM news ORDER BY id limit 25 offset 80000;
Thank you very much Fred. Yes, you are right. Interesting results. WITH combined index: SELECT id FROM news where id<80000 and date_sent is not null ORDER BY id, date_sent limit 25; --> 15ms SELECT id FROM news where id<80000 and date_sent is not null ORDER BY id limit 25 ; ---> 1.3 secs DROPPED combined index on id+date_sent: SELECT id FROM news where id<80000 and date_sent is not null ORDER BY id, date_sent limit 25 ; --> 2 secs SELECT id FROM news where id<80000 and date_sent is not null ORDER...
You can drop the index and try a WHERE clause with both columns. It should still be fast when the PK index is used.
Dear Fred, thank you for your help. You gave me THE important hint: I created the composite index (id+date_sent) because I wanted to speed up a "where clause" on both fields: This is extremely slow - and I didn't know why: SELECT id FROM news where id<80000 and date_sent < now ORDER BY id limit 25 ; But this is super fast: SELECT id FROM news where id<80000 and date_sent < now ORDER BY id, date_sent limit 25 ; The additional "order by" on "date_sent" after "id" is of course nonsense, but it seems...
I checked with the released jar and it shows the same result as yours. If you have to keep the useless index, you can fix the query speed by adding the timestamp column to the ORDER BY: explain plan for SELECT id FROM test where id<80000 ORDER BY id, date_sent limit 25 ;