HA-JDBC can activate and deactivate my databases OK, but activate is not doing any synchronization. I tried with both dump-restore and full. Perhaps this is related to doing programmatic setup instead of a configuration file.
The symptom is that activate returns very quickly, too quickly to be doing any synchronization. DatabaseClusterImpl then restores the db to the active list, but some time later I get an error due to mismatched contents of my databases (I assume on an update).
Mar 07, 2014 4:54:45 PM net.sf.hajdbc.logging.slf4j.SLF4JLogger log
SEVERE: A result [1] of an operation on database db2 in cluster net.sf.hajdbc.sql.DatabaseClusterImpl@747254 did not match the expected result [0]
Are you triggering activate(...) programmatically or via JMX? Are you specifying a specific sync strategy to the activate method? If not, what value are you using for the default synchronization strategy?
Are you using Oracle? dump-restore is only implemented for MySQL and PostgreSQL.
My hunch is that the changes we previously made to the behavior of DatabaseMetaData.getTables(...) are to blame. If this method returns no results, then sync would be "fast", because nothing is actually being sync'ed.
I am using MySQL with programmatic activation. The sync strategy map contains dump-restore, and the default strategy is dump-restore. It makes no difference if I switch to the full strategy either.
Here is a JUnit test to demonstrate the issue.
The basic idea is to start with db1 and db2. I create a table and insert a row. Then I deactivate db1, and the listener confirms it. Next I insert another row, so db2 now has two rows. Then I try to activate. Attempt #1, with DatabaseClusterImpl.activate(String), throws an SQLException caused by a NullPointerException related to MySQL JDBC password initialization, so I cannot use it. Attempt #2, with DatabaseClusterImpl.activate(DataSourceDatabase, StateManager), runs without error, and the listener confirms db1 is active again. However, as soon as I execute an UPDATE on the rows, I get a ResultSet size mismatch exception in the HA-JDBC log, and the listener shows db1 is immediately deactivated again.
Can you paste the full stacktrace of the SQLException from the DatabaseClusterImpl.activate(String) method?
That's the issue we need to resolve.
DatabaseCluster.activate(Database, StateManager) is not meant to synchronize the database, so don't use this method for that purpose. I never intended for users to trigger activation with synchronization programmatically; it was always meant to be an administrative task (i.e. performed via JMX), which is why the appropriate methods are not adequately exposed via the DatabaseCluster interface. I'll address this in the next release.
Using DatabaseClusterImpl.activate(String) makes sense. Looking back at my working HA-JDBC 2.0.15 code, I was using DriverDatabaseClusterMBean.deactivate(String) and DriverDatabaseClusterMBean.activate(String). That was a different class, but the method signature is the same.
By the way, my 3.0.0 code uses DatabaseClusterImpl.deactivate(DataSourceDatabase,StateManager) which seems to work. DatabaseClusterImpl.deactivate(String) also works. It is only the two activate() methods that are not working. I will switch to deactivate(String) to match usage of activate(String).
Here is the stack trace for DatabaseClusterImpl.activate(String).
Does db1 not have a password defined in your HA-JDBC configuration? It looks like HA-JDBC does not currently handle that case correctly. I'll fix this now.
You should be able to work around the issue by supplying a ~/.my.cnf file in your home directory that provides the username/password to use for the dump/restore processes.
I set data source JDBC properties during programmatic configuration. You can see it in the JUnit test attached earlier to this thread.
Is the problem that I set the JDBC password via DataSourceDatabase.setProperty() instead of DataSourceDatabase.setPassword()? If so, I can switch for both username and password.
Note, I don't see corresponding methods for setting url and driverClassName, so I assume they have to be set via generic setProperty().
Last edit: Justin Cranford 2014-03-10
Ah, yes. That's the problem. The dump/restore strategy uses the value of Database.getPassword() and passes that value to the dump/restore processes.
I just tried using DataSourceDatabase.setUsername() and DataSourceDatabase.setPassword(). I forgot those methods are forbidden by the BasicDataSource class in DBCP 1.4. That is why I used DataSourceDatabase.setProperty("username", username) and DataSourceDatabase.setProperty("password", password) instead in my HA-JDBC programmatic config.
java.sql.SQLException: Not supported by BasicDataSource
at net.sf.hajdbc.sql.SQLExceptionFactory.createException(SQLExceptionFactory.java:51)
at net.sf.hajdbc.sql.SQLExceptionFactory.createException(SQLExceptionFactory.java:35)
at net.sf.hajdbc.AbstractExceptionFactory.createException(AbstractExceptionFactory.java:62)
at net.sf.hajdbc.util.concurrent.LifecycleRegistry.get(LifecycleRegistry.java:95)
at net.sf.hajdbc.util.concurrent.LifecycleRegistry.get(LifecycleRegistry.java:34)
at net.sf.hajdbc.sql.CommonDataSource.getProxy(CommonDataSource.java:85)
at test.TestHAJDBCDemo.test(TestHAJDBCDemo.java:160)
at test.TestHAJDBCDemo.test(TestHAJDBCDemo.java:69)
Hmm. That's unfortunate. Have you tried specifying the password via a ~/.my.cnf file?
I tried my.ini. It did not work.
Another alternative...
Extend BasicDataSource and implement getConnection(String user, String password).
e.g.
public class MyBasicDataSource extends BasicDataSource
{
    @Override
    public Connection getConnection(String user, String password) throws SQLException
    {
        // Ignore the supplied credentials and use the pool's configured ones.
        return this.getConnection();
    }
}
This way you can still configure your DataSourceDatabase with a user name and password (so they can be read by the dump/restore process) and DBCP won't complain about this method not being supported.
To use this override of BasicDataSource, I would also have to override BasicDataSourceFactory from Apache Commons DBCP 1.4. Using a delegator class seems like a more straightforward way to override BasicDataSource's implementation.
Note that I already have my own DataSourceDelegator class. I needed it to work around a JNDI read-only exception in Tomcat. It feels a bit hackish, but I can change my vanilla delegator of javax.sql.DataSource to redirect getConnection(String,String) to getConnection().
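For illustration, a minimal delegating wrapper along those lines might look like the sketch below. RedirectingDataSource is a hypothetical name (not the actual DataSourceDelegator class mentioned above); the point is the redirect of getConnection(String,String) to getConnection(), so DBCP 1.4's "Not supported by BasicDataSource" exception is never triggered.

```java
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.logging.Logger;
import javax.sql.DataSource;

// Hypothetical sketch: a vanilla javax.sql.DataSource delegator that
// redirects the two-argument getConnection to the no-argument variant.
public class RedirectingDataSource implements DataSource {
    private final DataSource delegate;

    public RedirectingDataSource(DataSource delegate) {
        this.delegate = delegate;
    }

    @Override
    public Connection getConnection() throws SQLException {
        return delegate.getConnection();
    }

    @Override
    public Connection getConnection(String user, String password) throws SQLException {
        // Ignore the supplied credentials; use the pool's configured ones.
        return delegate.getConnection();
    }

    // The remaining DataSource methods simply delegate.
    @Override public PrintWriter getLogWriter() throws SQLException { return delegate.getLogWriter(); }
    @Override public void setLogWriter(PrintWriter out) throws SQLException { delegate.setLogWriter(out); }
    @Override public void setLoginTimeout(int seconds) throws SQLException { delegate.setLoginTimeout(seconds); }
    @Override public int getLoginTimeout() throws SQLException { return delegate.getLoginTimeout(); }
    @Override public <T> T unwrap(Class<T> iface) throws SQLException { return delegate.unwrap(iface); }
    @Override public boolean isWrapperFor(Class<?> iface) throws SQLException { return delegate.isWrapperFor(iface); }
    @Override public Logger getParentLogger() throws SQLFeatureNotSupportedException { throw new SQLFeatureNotSupportedException(); }
}
```

The wrapper can sit in front of any pool implementation, so it avoids having to subclass BasicDataSource or override BasicDataSourceFactory.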
Overriding getConnection(String,String) did not work. MySQLDialect.setPassword() still executes and gets a NULL password from the connection properties. However, I did discover a workaround!
Instead of calling both DataSourceDatabase.setUser() and DataSourceDatabase.setPassword(), I only set the password. That is enough to satisfy HA-JDBC 3.0.0 and Apache Commons DBCP 1.4.0.
That part is good, but then I hit another blocker. DumpRestoreSynchronizationStrategy gets an error logging into MySQL because it passes the wrong username. Here is the exception. Notice the username passed to MySQL incorrectly contains the hostname.
I traced the problem to DumpRestoreSynchronizationStrategy.java, lines 177-178. Line 178 gets the expected password value "rootpwd" from the metadata, but line 177 gets the wrong username "root@10.20.0.150" from the metadata. The expected username is "root", not "root@10.20.0.150".
this.user = metaData.getUserName();                            // WRONG: root@10.20.0.150 (should be "root")
this.password = database.decodePassword(context.getDecoder()); // CORRECT: rootpwd
So, the password workaround works with DBCP 1.4.0, but the username has a bug.
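Until the metadata handling changes, a defensive normalization of the reported value is straightforward. The sketch below assumes a hypothetical helper (stripHost is not part of HA-JDBC) that drops everything from the "@" onward:

```java
public class UserNameFix {
    // MySQL's DatabaseMetaData.getUserName() can return "user@host";
    // strip the host part so only the login name remains.
    static String stripHost(String metaDataUserName) {
        int at = metaDataUserName.indexOf('@');
        return (at < 0) ? metaDataUserName : metaDataUserName.substring(0, at);
    }
}
```

For example, stripHost("root@10.20.0.150") yields "root", while a plain "root" passes through unchanged.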
This is an implementation detail of MySQL's driver. Fortunately, you can configure the MySQL driver to omit the host name from the DatabaseMetaData.getUserName() method by appending the property "useHostsInPrivileges=false" to your JDBC URLs. For details, see:
http://dev.mysql.com/doc/connector-j/en/connector-j-reference-configuration-properties.html
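For example, using the host, port, and database name from this particular setup, the URL would look something like:

```
jdbc:mysql://10.20.0.150:3306/db1?useHostsInPrivileges=false
```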
I tried useHostsInPrivileges=false. That gets me past the login error, so both username and password settings seem good now. However, now I get a different exception.
Looks like a problem with the mysqldump command generated by DumpRestoreSynchronizationStrategy. I am running on Windows 7 x64 with MySQL 5.5.15 x32. I stepped into the HA-JDBC code, and this is the command it generated:
[mysqldump, -h, 10.20.0.150, -P, 3306, -u, root, db1, >, C:\tomcat\temp\ha-jdbc_6072594269671238714.dump]
Running that on the command line gives me an error, although a slightly different one.
mysqldump: Got error: 1045: Access denied for user 'root'@'10.20.0.150' (using password: NO) when trying to connect
To fix this error on my system, I had to add a "-p" parameter after the username. I don't need to specify the password value on the command line, but the "-p" parameter is required for mysqldump to prompt for the password. Perhaps mysqldump behaves slightly differently on Windows than on other platforms like Linux.
I switched from "dump-restore" to "full" synchronization to rule out any other issues. It works, so perhaps the only issue now is mysqldump command format compatibility on Windows.
The mysqldump command generated by HA-JDBC specifies the password via the MYSQL_PWD environment variable (or, alternatively, via ~/.my.cnf). That's why it does not use the "-p" argument. I've committed a fix for that here:
https://github.com/ha-jdbc/ha-jdbc/commit/ff768da7697c93cb96d6f85cedcaf6a5b988446f
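As a sketch of that mechanism (this mirrors the description above, not HA-JDBC's actual implementation): the password goes into the child process environment, so it never appears in the argument list.

```java
// Sketch: pass the password to mysqldump via the environment instead of -p.
// The argument layout mirrors the command quoted earlier in the thread.
public class DumpCommandSketch {
    public static ProcessBuilder dumpCommand(String host, int port, String user,
                                             String password, String database) {
        ProcessBuilder pb = new ProcessBuilder(
            "mysqldump", "-h", host, "-P", String.valueOf(port), "-u", user, database);
        pb.environment().put("MYSQL_PWD", password); // read by mysqldump; not visible in argv
        return pb;
    }
}
```

Keeping the password out of argv also means it cannot be seen in the process list while the dump is running.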
Not "my.ini". It needs to be named ".my.cnf" and be located in the home directory of the user under which the dump/restore process will run. See the mysql docs for details.
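A minimal ~/.my.cnf sketch, using the credentials that appear earlier in this thread (adjust to your own, and keep the file readable only by the owner, e.g. chmod 600):

```ini
[client]
user=root
password=rootpwd
```

MySQL client programs, including mysqldump and mysql, read the [client] group automatically.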
Quick question. Does it make sense to change HA-JDBC to call getConnection(), or to add the option to choose via a config setting somehow?
The problem seems to arise from mixing calls to getConnection() and getConnection(String,String). I have no idea why Apache Commons DBCP 1.4.0 would refuse to implement both getConnection methods from javax.sql.DataSource, especially since it did implement both in 1.2.0. However, it is what it is, and the side effect is that it makes HA-JDBC 3.0.0 incompatible with Apache Commons DBCP 1.4.0.
This is an interesting comment on Apache Commons 1.4:
http://stackoverflow.com/a/10997402