I am upgrading from 2.0.15 to 3.0.0 GA. The main reasons are for programmatic configuration of HA-JDBC, JDBC 4.1 support, and use of dump-restore sync strategy. My platform is Tomcat 7.0.50, Java 7u51 x32, and MySQL 5.6.
The "Programmatic Configuration" example section on http://ha-jdbc.github.io/doc.html uses Driver-based database members, and a Driver-based connection for HA-JDBC proxy connections. I switched the code to DataSource-based database members, but I cannot figure out how to encapsulate them in an HA-JDBC proxy DataSource.
Basically, can you add another example to the Programmatic Configuration section showing how to code a configuration like this, and put it in a JNDI DataSource entry with the name "java:comp/env/jdbc/database"?
<datasource id="database1" local="true" weight="1">
<name>java:comp/env/jdbc/database1</name>
</datasource>
<datasource id="database2" local="false" weight="0">
<name>java:comp/env/jdbc/database2</name>
</datasource>
Thanks!
Programmatically, this would look very similar. Instead of creating a DriverDatabase, you would create a DataSourceDatabase, where the location is the jndi name instead of the jdbc url. e.g.
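A minimal sketch of what that might look like (the ids and jndi names here are placeholders; the setters are the same DataSourceDatabase methods used later in this thread):

```java
DataSourceDatabase db1 = new DataSourceDatabase();
db1.setId("database1");
db1.setLocation("java:comp/env/jdbc/database1"); // jndi name, not a jdbc url
db1.setLocal(true);
db1.setWeight(1);

DataSourceDatabase db2 = new DataSourceDatabase();
db2.setId("database2");
db2.setLocation("java:comp/env/jdbc/database2");
db2.setLocal(false);
db2.setWeight(0);
```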
These DataSources must already be bound to jndi. Alternatively, you could pass a class name as the location, and HA-JDBC will construct the DataSource using the configured serverName, portNumber, and databaseName properties.
Next, instead of a DriverDatabaseClusterConfiguration, you would create a DataSourceDatabaseClusterConfiguration. e.g.
DataSourceDatabaseClusterConfiguration config = new DataSourceDatabaseClusterConfiguration();
config.setDatabases(Arrays.asList(db1, db2));
// ...
Lastly, create the HA-JDBC DataSource, associate it with the configuration, and, if you wish, bind it to jndi. e.g.
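A configuration sketch along these lines, assuming the DataSourceDatabaseClusterConfiguration named config from above (the jndi name is a placeholder, and binding is subject to the container's JNDI restrictions):

```java
net.sf.hajdbc.sql.DataSource ds = new net.sf.hajdbc.sql.DataSource();
ds.setCluster("mycluster");
ds.setConfigurationFactory(new SimpleDatabaseClusterConfigurationFactory<javax.sql.DataSource, DataSourceDatabase>(config));

// Optionally bind the HA-JDBC DataSource to jndi:
new InitialContext().bind("java:comp/env/jdbc/database", ds);
```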
I later realized the two simple mistakes I made converting the Driver-based Programmatic Configuration example to DataSource-based. As your example shows, I had to 1) switch from the static Driver.setConfigurationFactory() invocation to the non-static DataSource.setConfigurationFactory(), and 2) switch "java" to "javax", i.e. "java.sql.Driver" to "javax.sql.DataSource". They seem so simple now, but they tripped me up.
net.sf.hajdbc.sql.Driver setConfigurationFactory
net.sf.hajdbc.sql.Driver.setConfigurationFactory("mycluster", new SimpleDatabaseClusterConfigurationFactory<java.sql.Driver, DriverDatabase>(config));
net.sf.hajdbc.sql.DataSource setConfigurationFactory
net.sf.hajdbc.sql.DataSource ds = new net.sf.hajdbc.sql.DataSource();
ds.setConfigurationFactory("mycluster", new SimpleDatabaseClusterConfigurationFactory<javax.sql.DataSource, DataSourceDatabase>(config));
Once I got past the code errors, I worked on this over the weekend and ended up trying the same InitialContext.bind(). It did not work because Tomcat throws a JNDI read-only exception. To work around the run-time bind/rebind restriction, I implemented a simple DataSourceDelegatorFactory and DataSourceDelegator. This wraps my standard Tomcat DBCP BasicDataSource object in a delegator object during JNDI lookup. I can then edit the delegate inside the JNDI-bound DataSourceDelegator object to inject a net.sf.hajdbc.sql.DataSource object that I construct at run-time.
I put my code here in case other people might find it useful. If you think it is useful to add to HA-JDBC to work around the InitialContext.bind read-only restriction, please feel free to use it or modify it as you like.
server.xml
<Resource factory="com.mycompany.DataSourceDelegatorFactory" type="javax.sql.DataSource" driverClassName="com.mysql.jdbc.Driver" name="jdbc/mycluster" auth="Container" url="jdbc:mysql://localhost:3306/cspm" username="{enc2}9f95b8caecb15d31329d988af501230b" password="{enc2}9f95b8caecb15d31329d988af501230b"/>
DataSourceDelegator.java
// package declaration and imports
public class DataSourceDelegator implements javax.sql.DataSource {
private DataSource dataSourceDelegate = null;
public DataSourceDelegator(DataSource dataSourceDelegate) {this.dataSourceDelegate = dataSourceDelegate;}
public DataSource getDelegate() {return dataSourceDelegate;}
public void setDelegate(DataSource dataSourceDelegate) {this.dataSourceDelegate = dataSourceDelegate;}
// implement DataSource methods, which just pass invocations to dataSourceDelegate
}
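For reference, a sketch of the delegating boilerplate that the comment above elides, using only JDK interfaces (getParentLogger is JDBC 4.1, so this assumes Java 7):

```java
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.logging.Logger;
import javax.sql.DataSource;

public class DataSourceDelegator implements DataSource {
    private volatile DataSource delegate;

    public DataSourceDelegator(DataSource delegate) { this.delegate = delegate; }

    public DataSource getDelegate() { return this.delegate; }
    public void setDelegate(DataSource delegate) { this.delegate = delegate; }

    // Every javax.sql.DataSource method just forwards to the current delegate.
    public Connection getConnection() throws SQLException { return this.delegate.getConnection(); }
    public Connection getConnection(String user, String password) throws SQLException { return this.delegate.getConnection(user, password); }
    public PrintWriter getLogWriter() throws SQLException { return this.delegate.getLogWriter(); }
    public void setLogWriter(PrintWriter out) throws SQLException { this.delegate.setLogWriter(out); }
    public void setLoginTimeout(int seconds) throws SQLException { this.delegate.setLoginTimeout(seconds); }
    public int getLoginTimeout() throws SQLException { return this.delegate.getLoginTimeout(); }
    public Logger getParentLogger() throws SQLFeatureNotSupportedException { return this.delegate.getParentLogger(); }
    public <T> T unwrap(Class<T> iface) throws SQLException { return this.delegate.unwrap(iface); }
    public boolean isWrapperFor(Class<?> iface) throws SQLException { return this.delegate.isWrapperFor(iface); }
}
```

Making the delegate volatile means a run-time swap (e.g. injecting the HA-JDBC DataSource later) is visible to all threads without extra locking.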
DataSourceDelegatorFactory.java
// package declaration and imports
public class DataSourceDelegatorFactory implements javax.naming.spi.ObjectFactory {
public Object getObjectInstance(Object obj, Name name, Context nameCtx, Hashtable<?, ?> environment) throws Exception {
ObjectFactory dbcpDataSourceFactory = null;
try {
dbcpDataSourceFactory = (ObjectFactory) Class.forName(BasicDataSourceFactory.class.getCanonicalName()).newInstance();
DataSource basicDataSource = (DataSource) dbcpDataSourceFactory.getObjectInstance(obj, name, nameCtx, environment);
return new DataSourceDelegator(basicDataSource);
} catch (Exception e) {
NamingException ex = new NamingException("Could not create resource factory instance");
ex.initCause(e);
throw ex;
}
}
}
The difference is that the HA-JDBC driver is used for all HA-JDBC database clusters, whereas the DataSource is per cluster.
Out of curiosity, are your individual databases already using connection pooling? If so, you probably don't want to use DBCP for the HA-JDBC DataSource. A connection pool of connection pools will only cause headaches...
I have to use DataSource-based clustering in HA-JDBC because I have to manage 3 data sources in my Tomcat deployment. I have two schemas in MySQL, and one schema has two sets of credentials.
MySQL schema #1 is used by a legacy LAMP application bridged to HA-JDBC via a MySQL-to-JDBC bridge. For MySQL bridging, I verified that Tungsten Myosotis can be used. For other databases, I verified that PHP-Java-Bridge transparently runs PHP in Tomcat and allows PHP code to invoke Java methods, such as the JDBC APIs, directly.
MySQL schema #2 is used by a legacy Java web application. There are separate credentials for DDL and DML, though. This is an Identity Management product, so separation of privileges is mandatory for DDL versus DML. The reason is that most deployments only allow DDL access by DBAs, whereas other deployments allow our application to access both credentials. To be clear, DDL versus DML means two user accounts with mutually exclusive permissions - CREATE/DROP/UPDATE/ALTER versus SELECT/INSERT/UPDATE/DELETE/TRUNCATE.
My application requirements are irrelevant to HA-JDBC. All they mean is that I cannot use Driver-based clustering in HA-JDBC, only DataSource-based clustering.
So, the clustering I am trying to set up is one BasicDataSource per member (with pooling) wrapped in an HA-JDBC DataSource object (no pooling). I have to do this 3 times, once for each cluster. Programmatic configuration is ideal because it would be tedious and error-prone to do the same thing via XML config.
Do you think my setup is viable, or should I change my approach? I think it matches what you said, but with the added DataSourceDelegator holding the HA-JDBC DataSource reference to work around read-only JNDI bindings in Tomcat.
Last edit: Justin Cranford 2014-02-03
I am having trouble getting my programmatically configured cluster started. I get an exception during programmatic setup when calling DataSourceDatabaseClusterConfiguration.setLoginTimeout(4).
Failed to set login timeout, exception: javax.xml.bind.UnmarshalException
- with linked exception: [org.xml.sax.SAXParseException; systemId: file:/C:/tomcat/webapps/mywebapp/WEB-INF/classes/ha-jdbc-mycluster.xml; lineNumber: 7; columnNumber: 11; cvc-complex-type.2.4.b: The content of element 'ha-jdbc' is not complete. One of '{"urn:ha-jdbc:cluster:3.0":sync, "urn:ha-jdbc:cluster:3.0":state, "urn:ha-jdbc:cluster:3.0":lock, "urn:ha-jdbc:cluster:3.0":cluster}' is expected.]
Here is my HA-JDBC config. I am guessing I don't need one at all, but I cannot figure out how to programmatically configure HA-JDBC to use my custom JGroups tcp-sync config.
Here is code I am trying to use to programmatically configure my HA-JDBC DataSource...
ArrayList<DataSourceDatabase> dataSourceDatabases = new ArrayList<DataSourceDatabase>(1); // cluster of 1 node for test
for (int i = 0; i < this.databaseUrls.size(); i++) {
    boolean isLocalIpAddress = NetworkUtil.isLocal(this.databaseIpAddresses.get(i)); // IP parsed from URL
    DataSourceDatabase basicDataSource = new DataSourceDatabase();
    basicDataSource.setId("mydb" + (i + 1)); // mydb1, mydb2, mydb3, ...
    basicDataSource.setLocal(isLocalIpAddress);
    basicDataSource.setWeight(isLocalIpAddress ? 1 : 0);
    basicDataSource.setLocation(BasicDataSource.class.getCanonicalName());
    basicDataSource.setProperty("maxActive", "100");
    basicDataSource.setProperty("maxIdle", "25");
    basicDataSource.setProperty("minIdle", "5");
    basicDataSource.setProperty("removeAbandoned", "true");
    basicDataSource.setProperty("removeAbandonedTimeout", "300");
    basicDataSource.setProperty("logAbandoned", "false");
    basicDataSource.setProperty("validationQueryTimeout", "3");
    basicDataSource.setProperty("validationQuery", "SELECT 1");
    basicDataSource.setProperty("testOnBorrow", "true");
    basicDataSource.setProperty("testOnReturn", "false");
    basicDataSource.setProperty("maxWait", "10000");
    basicDataSource.setProperty("testWhileIdle", "false");
    basicDataSource.setProperty("timeBetweenEvictionRunsMillis", "300000");
    basicDataSource.setProperty("numTestsPerEvictionRun", "3");
    basicDataSource.setProperty("minEvictableIdleTimeMillis", "1800000");
    basicDataSource.setProperty("url", this.databaseUrls.get(i)); // jdbc:mysql://localhost:3306/mydb
    basicDataSource.setProperty("driverClassName", "com.mysql.jdbc.Driver");
    basicDataSource.setProperty("username", "myuser");
    basicDataSource.setProperty("password", "mypass");
    basicDataSource.setProperty("connectionProperties", "useUnicode=true;characterSetResults=utf8;alwaysSendSetIsolation=false;useLocalTransactionState=true;cacheServerConfiguration=true;connectTimeout=4000;socketTimeout=6000;autoReconnect=true;maxReconnects=1;initialTimeout=2;maxAllowedPacket=-1");
    dataSourceDatabases.add(basicDataSource);
    i++;
}
DataSourceDatabaseClusterConfiguration config = new DataSourceDatabaseClusterConfiguration();
config.setBalancerFactory(new RandomBalancerFactory());
config.setDefaultSynchronizationStrategy("dump-restore");
config.setDialectFactory(new MySQLDialectFactory());
config.setDatabaseMetaDataCacheFactory(new EagerDatabaseMetaDataCacheFactory());
config.setTransactionMode(TransactionModeEnum.SERIAL);
config.setFailureDetectionExpression(new CronExpression("0 0 0 1 1 ? 2000")); // never
config.setAutoActivationExpression(new CronExpression("0 0 0 1 1 ? 2000")); // never
config.setIdentityColumnDetectionEnabled(false);
config.setSequenceDetectionEnabled(false);
config.setCurrentDateEvaluationEnabled(true);
config.setCurrentTimeEvaluationEnabled(true);
config.setCurrentTimestampEvaluationEnabled(true);
config.setRandEvaluationEnabled(true);
config.setEmptyClusterAllowed(false);
config.setDurabilityFactory(new NoDurabilityFactory());
config.setDispatcherFactory(new JGroupsCommandDispatcherFactory());
config.setStateManagerFactory(new SimpleStateManagerFactory());
config.setDatabases(dataSourceDatabases);
net.sf.hajdbc.sql.DataSource ds = new net.sf.hajdbc.sql.DataSource();
ds.setUser("username");
ds.setPassword("password");
ds.setConfig("ha-jdbc-mycluster.xml");
ds.setCluster("mycluster");
try {
    ds.setLoginTimeout(4);
} catch (SQLException e) {
    e.printStackTrace();
}
ds.setTimeout(6, TimeUnit.SECONDS);
ds.setConfigurationFactory(new SimpleDatabaseClusterConfigurationFactory<DataSource, DataSourceDatabase>(config));
I feel like I am getting closer, but the lack of a documented example is making it a slow exercise to figure out which HA-JDBC APIs to use. I am trying to reproduce what I had with my HA-JDBC 2.0 configuration as a starting point, so your help would be much appreciated.
Last edit: Justin Cranford 2014-02-04
I can't see your xml configuration, so I can't tell what's wrong with your xml. Make sure you indent any code blocks in your posts so that they get parsed correctly via markdown.
The DataSource.setConfig(...) and DataSource.setConfigurationFactory(...) methods are mutually exclusive. The setConfig(...) method is actually just shorthand for the XMLDataSourceClusterConfigurationFactory. Currently, the config property takes precedence over the configurationFactory property - which is why HA-JDBC is trying to parse your xml file instead of using your programmatic configuration.
To programmatically define a custom JGroups configuration, use the following when defining your DataSourceDatabaseClusterConfiguration:
JGroupsCommandDispatcherFactory cdf = new JGroupsCommandDispatcherFactory();
cdf.setStack("tcp-sync.xml");
config.setCommandDispatcherFactory(cdf);
I edited my post to indent my config and code. Thanks for the tips.
I removed the setConfig() call.
I will check out the example.
I also had to remove the setCluster() call because setLoginTimeout() also triggered looking for ha-jdbc-{0}.xml. My cluster initializes now, but with "null" as the cluster name in the JGroups startup messages.
Finally, my cluster is not totally working yet. Calls to get a connection time out.
java.sql.SQLException
at net.sf.hajdbc.sql.SQLExceptionFactory.createException(SQLExceptionFactory.java:51)
at net.sf.hajdbc.sql.SQLExceptionFactory.createException(SQLExceptionFactory.java:35)
at net.sf.hajdbc.AbstractExceptionFactory.createException(AbstractExceptionFactory.java:62)
at net.sf.hajdbc.util.concurrent.LifecycleRegistry$RegistryEntry.getValue(LifecycleRegistry.java:135)
at net.sf.hajdbc.util.concurrent.LifecycleRegistry.get(LifecycleRegistry.java:60)
at net.sf.hajdbc.util.concurrent.LifecycleRegistry.get(LifecycleRegistry.java:34)
at net.sf.hajdbc.sql.CommonDataSource.getProxy(CommonDataSource.java:85)
at net.sf.hajdbc.sql.DataSource.getConnection(DataSource.java:51)
at com.mycompany.TransactionImpl.getDBConnection(TransactionImpl.java:254)
Caused by: java.util.concurrent.TimeoutException
... 30 more
OK - I see what's happening. There are 2 types of methods on the HA-JDBC DataSource:
1. Those relating to initial configuration
2. Proxied methods that delegate to the underlying DataSource implementation
Essentially, the methods on javax.sql.DataSource should not be called until the HA-JDBC DataSource is fully configured. The HA-JDBC DataSource also needs a cluster name (that explains the null in your JGroups cluster name). e.g.
net.sf.hajdbc.sql.DataSource ds = new net.sf.hajdbc.sql.DataSource();
ds.setCluster("mycluster");
ds.setConfigurationFactory(new SimpleDatabaseClusterConfigurationFactory<DataSource, DataSourceDatabase>(config));
ds.setUser("username");
ds.setPassword("password");
// Now the HA-JDBC DataSource is fully configured
// These methods will trigger HA-JDBC to start, and delegate to the underlying DataSource implementations
ds.setLoginTimeout(4);
ds.setTimeout(6, TimeUnit.SECONDS);
Admittedly, this is all a little awkward. It was designed with JNDI in mind, which requires a no-arg constructor. There is an ObjectFactory implementation, but it's cumbersome to use programmatically. I should probably provide a normal factory from which you can create the DataSource proxy. That way you can do something like:
DataSourceFactory factory = new DataSourceFactory(clusterName, configFactory);
// DataSourceFactory factory = new DataSourceFactory(clusterName, configFileName);
factory.setUser(...);
factory.setPassword(...);
javax.sql.DataSource ds = factory.createDataSource();
Thoughts?
OK. I think I am closer to the real root exception. Re-arranging the calls and adding setCluster("mycluster") fixes the log message, but it does not resolve "Caused by: java.util.concurrent.TimeoutException".
That exception happens further down the line in my application, so I tried adding a getConnection() immediately after setting up HA-JDBC. I am getting this JGroups exception during HA-JDBC initialization triggered by getConnection() or even setLoginTimeout().
-------------------------------------------------------------------
GMS: address=hostname-42531, cluster=mycluster.lock, physical address=fe80:0:0:0:2495:acaf:fc2:2997%13:60583
-------------------------------------------------------------------
-------------------------------------------------------------------
GMS: address=hostname-47457, cluster=mycluster.state, physical address=fe80:0:0:0:2495:acaf:fc2:2997%13:60583
-------------------------------------------------------------------
java.sql.SQLException: cluster 'mycluster.lock' is already connected to singleton transport: [mycluster.state, dummy-1391543430267, mycluster.lock, dummy-1391543430658]
at net.sf.hajdbc.sql.SQLExceptionFactory.createException(SQLExceptionFactory.java:51)
at net.sf.hajdbc.sql.SQLExceptionFactory.createException(SQLExceptionFactory.java:35)
at net.sf.hajdbc.AbstractExceptionFactory.createException(AbstractExceptionFactory.java:62)
at net.sf.hajdbc.util.concurrent.LifecycleRegistry.get(LifecycleRegistry.java:95)
at net.sf.hajdbc.util.concurrent.LifecycleRegistry.get(LifecycleRegistry.java:34)
at net.sf.hajdbc.sql.CommonDataSource.getProxy(CommonDataSource.java:85)
at net.sf.hajdbc.sql.DataSource.getConnection(DataSource.java:51)
Caused by: java.lang.IllegalStateException: cluster 'mycluster.lock' is already connected to singleton transport: [mycluster.state, dummy-1391543430267, mycluster.lock, dummy-1391543430658]
at org.jgroups.stack.ProtocolStack.startStack(ProtocolStack.java:909)
at org.jgroups.JChannel.startStack(JChannel.java:864)
at org.jgroups.JChannel._preConnect(JChannel.java:527)
at org.jgroups.JChannel.connect(JChannel.java:321)
at org.jgroups.JChannel.connect(JChannel.java:297)
at net.sf.hajdbc.distributed.jgroups.JGroupsCommandDispatcher.start(JGroupsCommandDispatcher.java:104)
at net.sf.hajdbc.lock.distributed.DistributedLockManager.start(DistributedLockManager.java:126)
at net.sf.hajdbc.sql.DatabaseClusterImpl.start(DatabaseClusterImpl.java:683)
at net.sf.hajdbc.util.concurrent.LifecycleRegistry.get(LifecycleRegistry.java:76)
... 31 more
Here is the code snippet I use to initialize JGroups. I am pointing it at the udp.xml inside the HA-JDBC jar file for now, until I can figure out how to upgrade my previous HA-JDBC 2.0 / JGroups 2.6 configuration into HA-JDBC 3.0.
JGroupsCommandDispatcherFactory jgroupsFactory = new JGroupsCommandDispatcherFactory();
jgroupsFactory.setTimeout(6000);
jgroupsFactory.setStack("udp.xml");
Removing the creation of JGroupsCommandDispatcherFactory and the call to setDispatcherFactory() does not resolve the problem. Now JGroups does not initialize, but I still get the java.util.concurrent.TimeoutException further down the line.
The first two methods start HA-JDBC and trigger this exception from JGroups:
Caused by: java.lang.IllegalStateException: cluster 'mycluster.lock' is already connected to singleton transport: [mycluster.state, dummy-1391543430267, mycluster.lock, dummy-1391543430658]
The getConnection() method triggers a different exception:
java.lang.AbstractMethodError: org.apache.tomcat.dbcp.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.isValid(I)Z
at net.sf.hajdbc.dialect.StandardDialect.isValid(StandardDialect.java:1004)
at net.sf.hajdbc.sql.DatabaseClusterImpl.isAlive(DatabaseClusterImpl.java:837)
at net.sf.hajdbc.sql.DatabaseClusterImpl.start(DatabaseClusterImpl.java:708)
at net.sf.hajdbc.util.concurrent.LifecycleRegistry.get(LifecycleRegistry.java:76)
at net.sf.hajdbc.util.concurrent.LifecycleRegistry.get(LifecycleRegistry.java:34)
at net.sf.hajdbc.sql.CommonDataSource.getProxy(CommonDataSource.java:85)
at net.sf.hajdbc.sql.DataSource.getConnection(DataSource.java:51)
If I skip all 3 calls, my application calls getConnection() later on, and I get the TimeoutException when calling getConnection().
Last edit: Justin Cranford 2014-02-04
The java.lang.AbstractMethodError from DBCP is due to version incompatibility. HA-JDBC 3.0 requires that the objects it proxies implement the JDBC 4.0 API, e.g. Connection.isValid(). You need to use DBCP 1.4. http://commons.apache.org/proper/commons-dbcp/
As for the ISE seen when starting the JGroups layer, this seems to be due to multiple concurrent threads attempting to start your HA-JDBC cluster. Is this the case?
Thanks. I am targeting Tomcat 7 and HA-JDBC 3.0, but I targeted HA-JDBC 3.0 first. I am still on Tomcat 5.5 until another developer works out our Tomcat 7 upgrade path. I will try upgrading Tomcat 5.5 to DBCP 1.4 as an intermediate step, and if that does not work I will switch to Tomcat 7 asap. Hopefully this will resolve all three of my run-time exceptions so I can move forward. Thanks so much for your help.
Awesome news. All 3 exceptions disappeared after I added these 2 jar files to my Tomcat 5.5 installation in $CATALINA_HOME/common/lib/.
commons-pool-1.6.jar
commons-dbcp-1.4.jar
Note, I had to make two code changes too:
1) Globally replace "org.apache.tomcat.dbcp.dbcp" with "org.apache.commons.dbcp".
2) Remove the HA-JDBC programmatic configuration calls to net.sf.hajdbc.sql.DataSource.setUser() and net.sf.hajdbc.sql.DataSource.setPassword().
The first code change was required to switch from the BasicDataSource implementation in Tomcat 5.5's DBCP 1.2 to the implementation in Apache Commons DBCP 1.4. The Tomcat DBCP classes are just a copy of the Apache Commons DBCP classes with renamed packages, so the new Apache Commons DBCP can co-exist with the old Tomcat DBCP on the same classpath.
The second code change was required to fix a run-time exception during the net.sf.hajdbc.sql.DataSource.getConnection() call. The DBCP 1.4.0 method org.apache.commons.dbcp.BasicDataSource.getConnection(String,String) (BasicDataSource.java:1062) explicitly throws this java.lang.UnsupportedOperationException:
java.sql.SQLException: Not supported by BasicDataSource
at net.sf.hajdbc.sql.SQLExceptionFactory.createException(SQLExceptionFactory.java:51)
at net.sf.hajdbc.sql.SQLExceptionFactory.createException(SQLExceptionFactory.java:35)
at net.sf.hajdbc.AbstractExceptionFactory.createException(AbstractExceptionFactory.java:62)
at net.sf.hajdbc.util.reflect.Methods.invoke(Methods.java:53)
at net.sf.hajdbc.invocation.SimpleInvoker.invoke(SimpleInvoker.java:53)
at net.sf.hajdbc.invocation.AllResultsCollector$Invocation.call(AllResultsCollector.java:141)
at net.sf.hajdbc.util.concurrent.SynchronousExecutor$EagerFuture.<init>(SynchronousExecutor.java:406)
at net.sf.hajdbc.util.concurrent.SynchronousExecutor.invokeAll(SynchronousExecutor.java:175)
at net.sf.hajdbc.util.concurrent.SynchronousExecutor.invokeAll(SynchronousExecutor.java:152)
at net.sf.hajdbc.invocation.AllResultsCollector.collectResults(AllResultsCollector.java:83)
at net.sf.hajdbc.invocation.InvokeOnManyInvocationStrategy.invoke(InvokeOnManyInvocationStrategy.java:62)
at net.sf.hajdbc.invocation.InvocationStrategies.invoke(InvocationStrategies.java:35)
at net.sf.hajdbc.sql.AbstractInvocationHandler.invokeOnProxy(AbstractInvocationHandler.java:95)
at net.sf.hajdbc.sql.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:85)
at com.sun.proxy.$Proxy15.getConnection(Unknown Source)
at net.sf.hajdbc.sql.DataSource.getConnection(DataSource.java:51)
at com.mycompany.TransactionImpl.getDBConnection(TransactionImpl.java:254)
Caused by: java.lang.UnsupportedOperationException: Not supported by BasicDataSource
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1062)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at net.sf.hajdbc.util.reflect.Methods.invoke(Methods.java:49)
... 40 more
I looked at the HA-JDBC 3.0 source code, and it calls this unsupported javax.sql.DataSource.getConnection(String,String) method at net.sf.hajdbc.sql.DataSource.getConnection(DataSource.java:51). However, it only calls the unsupported BasicDataSource.getConnection(String,String) method if net.sf.hajdbc.sql.DataSource.setUser() and net.sf.hajdbc.sql.DataSource.setPassword() were called.
Question: Should the net.sf.hajdbc.sql.DataSource.getConnection() code actually be calling javax.sql.DataSource.getConnection(String,String)? That method is not supported in Apache Commons DBCP 1.4, and maybe it is not supported in other DataSource implementations either? Maybe add a warning message to HA-JDBC that setUser() and setPassword() may not be compatible with the BasicDataSource DataSource implementation.
The purpose of storing the credentials in the DataSource is so that your application doesn't need to know about them. DBCP may not implement this, but most vendor DataSource implementations do. I'll add an FAQ entry for this.
The reason I access the credentials programmatically at run-time is to decrypt them based on the current rolling key in our application. The application is an Identity Management product with its own FIPS-certified encryption engine, or HSM-based encryption from SafeNet or Thales. HA-JDBC 3.0 has encryption, but I cannot use it based on the specific requirements of our product. Sorry for the confusion.
net.sf.hajdbc.sql.DataSource.setTimeout();
net.sf.hajdbc.sql.DataSource.getConnection();
net.sf.hajdbc.sql.DataSource.setUser(); // but triggers HA-JDBC to call the unsupported method
net.sf.hajdbc.sql.DataSource.setPassword(); // but triggers HA-JDBC to call the unsupported method
How should I initialize the cluster at the end of my code? I would like to initialize and get its DatabaseClusterImpl object. That class seems to have the APIs I need to add listeners for synchronization, activation, and deactivation.
The HA-JDBC database cluster will auto-initialize the first time you try to create a connection (or perform any other method that forces creation of the javax.sql.DataSource proxy). Each of these methods internally invokes net.sf.hajdbc.sql.DataSource.getProxy(), which returns the javax.sql.DataSource proxy. The InvocationHandler of this proxy will contain a reference to the DatabaseCluster.
e.g.
I finished coding and am now running a simple test. I am comparing a non-clustered BasicDataSource+MySQL run-time to clustered HA-JDBC+BasicDataSource+MySQL. I am only wrapping that one MySQL database in HA-JDBC via my JNDI-bound DataSourceDelegator.
Without HA-JDBC, MySQL connection count hits 8 and remains steady under heavy concurrent load. With HA-JDBC, it climbs without dropping until it reaches 500 configured max connections in my MySQL instance. MySQL then refuses further connections.
I am not sure how to isolate the bug. I put a break point in PoolableConnection.close(), and connections seem to be returned to the pool. However, I see hundreds of "MySQL Statement Cancellation Timer" threads in my Tomcat JVM.
I am using latest MySQL Connector/J 5.1.28 driver. Any ideas how to proceed?
How exactly does your application access the HA-JDBC DataSource? It sounds to me like your application might be creating the HA-JDBC DataSource multiple times. Is that possible?
I am upgrading from 2.0.15 to 3.0.0 GA. The main reasons are for programmatic configuration of HA-JDBC, JDBC 4.1 support, and use of dump-restore sync strategy. My platform is Tomcat 7.0.50, Java 7u51 x32, and MySQL 5.6.
The example "Programmatic Configuration" section on http://ha-jdbc.github.io/doc.html uses Driver-based database members, and a Driver-based connection for HA-JDBC proxy connections. I switched the code to DataSource-based database members, but how to encapsulate them in a HA-JDBC proxy DataSource.
Basically, can you add another example to the Programmatic Configuration section to code a configuration like this, and put it in a JNDI DataSource entry with name "java:comp/env/jdbc/database".
<datasource id="database1" local="true" weight="1">
<name>java:comp/env/jdbc/database1</name>
</datasource>
<datasource id="database2" local="false" weight="0">
<name>java:comp/env/jdbc/database2</name>
</datasource>
Thanks!
Programmatically, this would look very similar. Instead of creating a DriverDatabase, you would create a DataSourceDatabase, where the location is the jndi name instead of the jdbc url. e.g.
These DataSource must already be bound to jndi. Alternatively, you could pass in a class name as the location, and HA-JDBC will construct the DataSource using the configured serverName, portNumber, and databaseName properties.
Next, instead of a DriverDatabaseClusterConfiguration, you would create a DataSourceDatabaseClusterConfiguration. e.g.
Lastly, create the HA-JDBC DataSource, associate it with the configuration, and, if you wish, bind it to jndi. e.g.
Thanks for your feedback.
I realized later the two simple mistakes I made converting the Driver-based Programmatic Configuration example to DataSource-based. Like your example shows, I had to 1) switch from static Driver.setConfiguration() invocation to non-static DataSource.setConfiguration(), and 2) switch "java" to "javax" in "java.sql.Driver" to "javax.sql.DataSource". They seem so simple now, but it tripped me up.
net.sf.hajdbc.sql.Driver setConfigurationFactory
net.sf.hajdbc.sql.Driver.setConfigurationFactory("mycluster", new SimpleDatabaseClusterConfigurationFactory<java.sql.Driver, DriverDatabase="">(config));
net.sf.hajdbc.sql.DataSource setConfigurationFactory
net.sf.hajdbc.sql.DataSource ds = new net.sf.hajdbc.sql.DataSource()
ds.setConfigurationFactory("mycluster", new SimpleDatabaseClusterConfigurationFactory<javax.sql.DataSource, DataSourceDatabase="">(config));
Once I got past the code error, I worked on this over the weekend and ended up trying the same InitialContext.bind(). It did not work because it throws a JNDI read-only exception. To workaround the run-time bind/rebind restriction, I implemented a simple DataSourceDelegatorFactory and DataSourceDelegator. This puts my standard Tomcat DBCP BasicDataSource object in a delegator wrapper object during JNDI lookup. I can then edit the delegate inside the JNDI bound DataSourceDelegator object to inject a net.sf.hajdbc.sql.DataSource object that I construct at run-time.
I put my code here in case other people might find it useful. If you think it is useful to add to HA-JDBC to workaround the InitialContext.bind read-only restriction, please feel free to use it or modify as you like.
server.xml
<Resource factory="com.mycompany.DataSourceDelegatorFactory" type="javax.sql.DataSource" driverClassName="com.mysql.jdbc.Driver" name="jdbc/mycluster" auth="Container" url="jdbc:mysql://localhost:3306/cspm" username="{enc2}9f95b8caecb15d31329d988af501230b" password="{enc2}9f95b8caecb15d31329d988af501230b"/>
DataSourceDelegator.java
    // package declaration and imports
    public class DataSourceDelegator implements javax.sql.DataSource {
        private DataSource dataSourceDelegate = null;
        public DataSourceDelegator(DataSource dataSourceDelegate) { this.dataSourceDelegate = dataSourceDelegate; }
        public DataSource getDelegate() { return dataSourceDelegate; }
        public void setDelegate(DataSource dataSourceDelegate) { this.dataSourceDelegate = dataSourceDelegate; }
        // implement the remaining DataSource methods, each simply passing the invocation to dataSourceDelegate
    }
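For reference, the placeholder comment above expands to about a dozen pass-through overrides. Here is a self-contained sketch of the same class at the JDBC 4.1 level (so getParentLogger() is included); it uses only JDK types, so the exact method set may differ slightly on older JDKs:

```java
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.logging.Logger;
import javax.sql.DataSource;

// Every DataSource method forwards to the current delegate, which can be
// swapped at run-time via setDelegate() (e.g. to inject the HA-JDBC proxy).
class DataSourceDelegator implements DataSource {
    private volatile DataSource delegate;

    public DataSourceDelegator(DataSource delegate) { this.delegate = delegate; }

    public DataSource getDelegate() { return this.delegate; }
    public void setDelegate(DataSource delegate) { this.delegate = delegate; }

    @Override public Connection getConnection() throws SQLException { return this.delegate.getConnection(); }
    @Override public Connection getConnection(String user, String password) throws SQLException { return this.delegate.getConnection(user, password); }
    @Override public PrintWriter getLogWriter() throws SQLException { return this.delegate.getLogWriter(); }
    @Override public void setLogWriter(PrintWriter out) throws SQLException { this.delegate.setLogWriter(out); }
    @Override public void setLoginTimeout(int seconds) throws SQLException { this.delegate.setLoginTimeout(seconds); }
    @Override public int getLoginTimeout() throws SQLException { return this.delegate.getLoginTimeout(); }
    @Override public Logger getParentLogger() throws SQLFeatureNotSupportedException { return this.delegate.getParentLogger(); }
    @Override public <T> T unwrap(Class<T> iface) throws SQLException { return this.delegate.unwrap(iface); }
    @Override public boolean isWrapperFor(Class<?> iface) throws SQLException { return this.delegate.isWrapperFor(iface); }
}
```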
DataSourceDelegatorFactory.java
    // package declaration and imports
    public class DataSourceDelegatorFactory implements javax.naming.spi.ObjectFactory {
        public Object getObjectInstance(Object obj, Name name, Context nameCtx, Hashtable<?, ?> environment) throws Exception {
            try {
                // Delegate resource creation to the standard DBCP factory, then wrap the result
                ObjectFactory dbcpDataSourceFactory = new BasicDataSourceFactory();
                DataSource basicDataSource = (DataSource) dbcpDataSourceFactory.getObjectInstance(obj, name, nameCtx, environment);
                return new DataSourceDelegator(basicDataSource);
            } catch (Exception e) {
                NamingException ex = new NamingException("Could not create resource factory instance");
                ex.initCause(e);
                throw ex;
            }
        }
    }
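For completeness, the run-time injection step described above would look roughly like this. This is a sketch based on the method names quoted in this thread (the JNDI name, the `config` variable, and the exact setter signatures are assumptions; verify them against the 3.0 API):

```java
// Look up the delegator bound by the factory, then replace its DBCP
// delegate with the HA-JDBC proxy DataSource built at run-time.
Context context = new InitialContext();
DataSourceDelegator delegator =
        (DataSourceDelegator) context.lookup("java:comp/env/jdbc/mycluster");

net.sf.hajdbc.sql.DataSource haDataSource = new net.sf.hajdbc.sql.DataSource();
haDataSource.setCluster("mycluster"); // per the later posts in this thread
haDataSource.setConfigurationFactory(
        new SimpleDatabaseClusterConfigurationFactory<javax.sql.DataSource, DataSourceDatabase>(config));

delegator.setDelegate(haDataSource); // application code now uses HA-JDBC transparently
```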
The difference is that the HA-JDBC driver is used for all HA-JDBC database clusters, whereas the DataSource is per cluster.
Out of curiosity, are your individual databases already using connection pooling? If so, you probably don't want to use DBCP for the HA-JDBC DataSource. A connection pool of connection pools will only cause headaches...
I have to use DataSource-based clustering in HA-JDBC because I have to manage 3 data sources in my Tomcat deployment. I have two schemas in MySQL, and one schema has two sets of credentials.
MySQL schema #1 is used by a legacy LAMP application bridged to HA-JDBC via a MySQL-to-JDBC bridge. For MySQL bridging, I verified Tungsten Myosotis can be used. For other databases, I verified PHP-Java-Bridge transparently runs PHP in Tomcat, and it allows PHP code to invoke Java methods directly like JDBC APIs.
MySQL schema #2 is used by a legacy Java web application. There are separate credentials for DDL and DML, though. This is an Identity Management product, so separation of privileges is mandatory for DDL versus DML. The reason is most deployments only allow DDL access by DBAs, whereas other deployments allow our application to access both credentials. To be clear, DDL versus DML means two user accounts with mutually exclusive permissions - CREATE/DROP/ALTER versus SELECT/INSERT/UPDATE/DELETE/TRUNCATE.
My application requirements are irrelevant to HA-JDBC. All it means is I cannot use Driver-based clustering in HA-JDBC, only DataSource clustering.
So, the clustering I am trying to setup is one BasicDataSource per member (with pooling) wrapped in a HA-JDBC DataSource object (no pooling). I have to do this 3 times, one for each cluster. Programmatic configuration is ideal because it would be tedious and error prone to do the same thing via XML config.
Do you think my setup is viable, or should I change my approach? I think it matches what you said, but with the added DataSourceDelegator holding the HA-JDBC DataSource reference to work around read-only JNDI bindings in Tomcat.
Last edit: Justin Cranford 2014-02-03
Sounds like a viable setup to me. Let me know if you have any other issues.
I am having trouble getting my programmatically configured cluster started. I get an exception during programmatic configuration when calling DataSourceDatabaseClusterConfiguration.setLoginTimeout(4).
Failed to set login timeout, exception: javax.xml.bind.UnmarshalException
- with linked exception:
[org.xml.sax.SAXParseException; systemId: file:/C:/tomcat/webapps/mywebapp/WEB-INF/classes/ha-jdbc-mycluster.xml; lineNumber: 7; columnNumber: 11; cvc-complex-type.2.4.b: The content of element 'ha-jdbc' is not complete. One of '{"urn:ha-jdbc:cluster:3.0":sync, "urn:ha-jdbc:cluster:3.0":state, "urn:ha-jdbc:cluster:3.0":lock, "urn:ha-jdbc:cluster:3.0":cluster}' is expected.]
Here is my HA-JDBC config. I am guessing I don't need one at all, but I cannot figure out how to programmatically configure HA-JDBC to use my custom JGroups tcp-sync config.
ha-jdbc-mycluster.xml
Here is code I am trying to use to programmatically configure my HA-JDBC DataSource...
I feel like I am getting closer, but the lack of documented examples makes this a slow exercise of figuring out which HA-JDBC APIs to use. I am trying to reproduce my HA-JDBC 2.0 configuration as a starting point, so your help would be much appreciated.
Last edit: Justin Cranford 2014-02-04
I can't see your xml configuration, so I can't tell what's wrong with your xml. Make sure you indent any code blocks in your posts so that they get parsed correctly via markdown.
The DataSource.setConfig(...) and DataSource.setConfigurationFactory(...) methods are mutually exclusive. The setConfig(...) method is actually just shorthand for the XMLDataSourceClusterConfigurationFactory. Currently, the config property takes precedence over the configurationFactory property - which is why HA-JDBC is trying to parse your xml file instead of using your programmatic configuration.
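In code, the two mutually exclusive approaches look roughly like this (a sketch based on the method names used in this thread, not a verified listing):

```java
net.sf.hajdbc.sql.DataSource ds = new net.sf.hajdbc.sql.DataSource();

// Option 1: XML-based - shorthand for XMLDataSourceClusterConfigurationFactory
ds.setConfig("file:///path/to/ha-jdbc-mycluster.xml");

// Option 2: programmatic - do NOT also call setConfig(), because the config
// property currently takes precedence over the configurationFactory property
ds.setConfigurationFactory(
        new SimpleDatabaseClusterConfigurationFactory<javax.sql.DataSource, DataSourceDatabase>(config));
```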
To programmatically define a custom JGroups configuration, use the following when defining your DataSourceDatabaseClusterConfiguration:
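The snippet presumably resembles the following. JGroupsCommandDispatcherFactory and setDispatcherFactory() are referenced later in this thread; the setStack() property name is my assumption, so check the hajdbc-jgroups module for the exact setter:

```java
// Point HA-JDBC's command dispatcher at a custom JGroups protocol stack.
JGroupsCommandDispatcherFactory dispatcherFactory = new JGroupsCommandDispatcherFactory();
dispatcherFactory.setStack("my-tcp-sync.xml"); // custom stack resource (assumed property name)
config.setDispatcherFactory(dispatcherFactory);
```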
Also, I should have pointed out the following test case, which is as good an example as any of using programmatic configuration:
https://github.com/ha-jdbc/ha-jdbc/blob/3.0/src/test/java/net/sf/hajdbc/SimpleTest.java
Last edit: Paul Ferraro 2014-02-03
I edited my post to indent my config and code. Thanks for the tips.
I also had to remove the setCluster() call because setLoginTimeout() also triggered looking for ha-jdbc-{0}.xml. My cluster initializes now, but with "null" as the cluster name in these JGroups startup messages.
GMS: address=hostname-48453, cluster=null.lock, physical address=fe80:0:0:0:2495:acaf:fc2:2997%13:56183
GMS: address=hostname-5867, cluster=null.state, physical address=fe80:0:0:0:2495:acaf:fc2:2997%13:56183
Finally, my cluster is not totally working yet. Calls to get a connection time out.
OK - I see what's happening. There are 2 types of methods on the HA-JDBC DataSource:
1. Those relating to initial configuration
2. Proxied methods that delegate to the underlying DataSource implementation
Essentially, the methods of javax.sql.DataSource should not be called until the HA-JDBC DataSource is fully configured. The HA-JDBC DataSource also needs a cluster name (that explains the "null" in your JGroups cluster names). e.g.
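A sketch of the ordering, using the method names from this thread (exact signatures unverified):

```java
net.sf.hajdbc.sql.DataSource ds = new net.sf.hajdbc.sql.DataSource();

// 1. Configuration methods first - including the cluster name:
ds.setCluster("mycluster");
ds.setConfigurationFactory(
        new SimpleDatabaseClusterConfigurationFactory<javax.sql.DataSource, DataSourceDatabase>(config));

// 2. Only then call javax.sql.DataSource methods, which build the proxy:
ds.setLoginTimeout(4);
java.sql.Connection connection = ds.getConnection();
```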
Admittedly, this is all a little awkward. It was designed with JNDI in mind, which requires a no-arg constructor. There is an ObjectFactory implementation, but it's cumbersome to use programmatically. I should probably provide a normal factory from which you can create the DataSource proxy. That way you can do something like:
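Purely as a strawman - no such class exists yet, and DataSourceProxyFactory is a name invented for this proposal - the usage might look like:

```java
// Hypothetical one-step factory API for the DataSource proxy.
javax.sql.DataSource ds = DataSourceProxyFactory.createDataSource("mycluster", config);
new InitialContext().bind("java:comp/env/jdbc/mycluster", ds);
```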
Thoughts?
OK. I think I am closer to the real root exception. Re-arranging the calls and adding setCluster("mycluster") fixes the log message, but it does not resolve "Caused by: java.util.concurrent.TimeoutException".
That exception happens further down the line in my application, so I tried adding a getConnection() immediately after setting up HA-JDBC. I am getting this JGroups exception during HA-JDBC initialization triggered by getConnection() or even setLoginTimeout().
Here is the code snippet I use to initialize JGroups. I am pointing it at the udp.xml inside the HA-JDBC jar file for now, until I can figure out how to upgrade my previous HA-JDBC 2.0 / JGroups 2.6 configuration into HA-JDBC 3.0.
Removing the creation of the JGroupsCommandDispatcherFactory and the call to setDispatcherFactory() does not resolve the problem. Now JGroups does not initialize, but I still get the java.util.concurrent.TimeoutException further down the line.
Did you read this post? https://sourceforge.net/p/ha-jdbc/discussion/383397/thread/6f9f1639/#6cc2
CORRECTION:
I looked at the post. These methods trigger HA-JDBC start:
The first two methods start HA-JDBC and trigger this exception from JGroups:
The getConnections() method triggers a different exception:
If I skip all 3 calls, my application calls getConnection() later on, and I get the TimeoutException then.
Last edit: Justin Cranford 2014-02-04
Corrected previous post. The call to getConnection() triggers some other exception unrelated to JGroups.
The java.lang.AbstractMethodError from DBCP is due to a version incompatibility. HA-JDBC 3.0 requires that the objects it proxies implement the JDBC 4.0 API, e.g. Connection.isValid(). You need to use DBCP 1.4.
http://commons.apache.org/proper/commons-dbcp/
As for the ISE seen when starting the JGroups layer, this seems to be due to multiple concurrent threads attempting to start your HA-JDBC cluster. Is this the case?
Thanks. I am targeting Tomcat 7 and HA-JDBC 3.0, but I targeted HA-JDBC 3.0 first. I am still on Tomcat 5.5 until another developer works out our Tomcat 7 upgrade path. I will try upgrading Tomcat 5.5 to DBCP 1.4 as an intermediate step, and if that does not work I will switch to Tomcat 7 ASAP. Hopefully this will resolve all three of my run-time exceptions so I can move forward. Thanks so much for your help.
Awesome news. All 3 exceptions disappeared after I added these 2 jar files to my Tomcat 5.5 installation in $CATALINA_HOME/common/lib/.
Note, I had to make two code changes too:
The first code change was required to switch from the BasicDataSource implementation in Tomcat 5.5's DBCP 1.2 to the implementation in Apache Commons DBCP 1.4. The Tomcat DBCP classes are just a copy of the Apache Commons DBCP classes with renamed packages, so the new Apache Commons DBCP can co-exist with the old Tomcat DBCP in the same classpath.
The second code change was required to fix a run-time exception during the net.sf.hajdbc.sql.DataSource.getConnection() call. The source code of the DBCP 1.4.0 method org.apache.commons.dbcp.BasicDataSource.getConnection(String, String) (BasicDataSource.java:1062) explicitly throws this java.lang.UnsupportedOperationException.
I looked at the HA-JDBC 3.0 source code, and it calls this unsupported javax.sql.DataSource.getConnection(String, String) method at net.sf.hajdbc.sql.DataSource.getConnection(DataSource.java:51). However, it only calls the unsupported BasicDataSource.getConnection(String, String) method if net.sf.hajdbc.sql.DataSource.setUser() and net.sf.hajdbc.sql.DataSource.setPassword() were called.
Question: Should the net.sf.hajdbc.sql.DataSource.getConnection() code actually be calling javax.sql.DataSource.getConnection(String, String)? That method is not supported in Apache Commons DBCP 1.4, and maybe it is not supported in other DataSource implementations either? Maybe add a warning message to HA-JDBC that setUser() and setPassword() may not be compatible with the BasicDataSource implementation.
The purpose of storing the credentials in the DataSource is so that your application doesn't need to know about them. DBCP may not implement this, but most vendor DataSource implementations do. I'll add an FAQ entry for this.
The reason I access the credentials programmatically at run-time is to decrypt them based on the current rolling key in our application. The application is an Identity Management product with its own FIPS-certified encryption engine, or HSM-based encryption from SafeNet or Thales. HA-JDBC 3.0 has encryption, but I cannot use it based on the specific requirements of our product. Sorry for the confusion.
I forgot to mention BasicDataSource.getLoginTimeout() also throws "java.lang.UnsupportedOperationException: Not supported by BasicDataSource".
net.sf.hajdbc.sql.DataSource.setLoginTimeout();
net.sf.hajdbc.sql.DataSource.getConnection(String,String);
net.sf.hajdbc.sql.DataSource.setTimeout();
net.sf.hajdbc.sql.DataSource.getConnection();
net.sf.hajdbc.sql.DataSource.setUser(); // but triggers HA-JDBC to call the unsupported method
net.sf.hajdbc.sql.DataSource.setPassword(); // but triggers HA-JDBC to call the unsupported method
How should I initialize the cluster at the end of my code? I would like to initialize and get its DatabaseClusterImpl object. That class seems to have the APIs I need to add listeners for synchronization, activation, and deactivation.
The HA-JDBC database cluster will auto-initialize the first time you try to create a connection (or perform any other method that forces creation of the javax.sql.DataSource proxy). Each of these methods internally invokes net.sf.hajdbc.sql.DataSource.getProxy(), which returns the javax.sql.DataSource proxy. The InvocationHandler of this proxy will contain a reference to the DatabaseCluster.
e.g.
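As a generic illustration of the mechanism (plain JDK dynamic proxies, not HA-JDBC's actual classes - the String field below merely stands in for the DatabaseCluster reference held by the real handler):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// The proxy's InvocationHandler carries a reference to shared state, which
// callers can recover via Proxy.getInvocationHandler().
class ProxyHandlerDemo {
    interface Service { String name(); }

    static class Handler implements InvocationHandler {
        final String cluster; // stands in for the DatabaseCluster reference
        Handler(String cluster) { this.cluster = cluster; }
        @Override
        public Object invoke(Object proxy, Method method, Object[] args) {
            return "proxied:" + this.cluster;
        }
    }

    static Service create(String cluster) {
        return (Service) Proxy.newProxyInstance(
                Service.class.getClassLoader(),
                new Class<?>[] { Service.class },
                new Handler(cluster));
    }

    static String clusterOf(Service proxy) {
        // Recover the handler behind the proxy, then the state it holds
        return ((Handler) Proxy.getInvocationHandler(proxy)).cluster;
    }
}
```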
Alternatively, you can use JMX to register your event listeners.
You also want to make sure that you properly shutdown HA-JDBC in a shutdown hook somewhere in your code. e.g.
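A sketch of such a hook; stop() is an assumption here, so verify the actual lifecycle method on the 3.0 DataSource (or its DatabaseCluster) before relying on it, and createAndConfigureDataSource() stands in for your own setup code:

```java
// Register a JVM shutdown hook that stops the HA-JDBC cluster cleanly.
final net.sf.hajdbc.sql.DataSource ds = createAndConfigureDataSource();
Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
        ds.stop(); // assumed lifecycle method - see note above
    }
});
```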
Last edit: Paul Ferraro 2014-02-05
I finished coding and now running a simple test. I am comparing non-clustered BasicDataSource+MySQL run-time to clustered HAJDBC+BasicDataSource+MySQL. I am only wrapping that one MySQL database in HA-JDBC via my JNDI bound DataSourceDelegator.
Without HA-JDBC, MySQL connection count hits 8 and remains steady under heavy concurrent load. With HA-JDBC, it climbs without dropping until it reaches 500 configured max connections in my MySQL instance. MySQL then refuses further connections.
I am not sure how to isolate the bug. I put a breakpoint in PoolableConnection.close(), and connections seem to be returned to the pool. However, I see hundreds of "MySQL Statement Cancellation Timer" threads in my Tomcat JVM.
I am using the latest MySQL Connector/J 5.1.28 driver. Any ideas on how to proceed?
How exactly does your application access the HA-JDBC DataSource? It sounds to me like your application might be creating the HA-JDBC DataSource multiple times. Is that possible?