Hi,
We've migrated a legacy app from hardcoded DB2 driver connections to c3p0; however, we're getting intermittent failures like the one below:
com.ibm.db2.jcc.am.SqlException: DB2 SQL Error: SQLCODE=-954, SQLSTATE=57011, SQLERRMC=null, DRIVER=4.14.113
    at com.ibm.db2.jcc.am.ed.a(ed.java:682)
    at com.ibm.db2.jcc.am.ed.a(ed.java:60)
    at com.ibm.db2.jcc.am.ed.a(ed.java:127)
    at com.ibm.db2.jcc.am.ResultSet.completeSqlca(ResultSet.java:4101)
    at com.ibm.db2.jcc.am.ResultSet.earlyCloseComplete(ResultSet.java:4083)
    at com.ibm.db2.jcc.t4.ab.a(ab.java:835)
    at com.ibm.db2.jcc.t4.ab.n(ab.java:801)
    at com.ibm.db2.jcc.t4.ab.j(ab.java:253)
    at com.ibm.db2.jcc.t4.ab.d(ab.java:55)
    at com.ibm.db2.jcc.t4.p.c(p.java:44)
    at com.ibm.db2.jcc.t4.qb.j(qb.java:147)
    at com.ibm.db2.jcc.am.oo.kb(oo.java:2158)
    at com.ibm.db2.jcc.am.po.b(po.java:4482)
    at com.ibm.db2.jcc.am.po.hc(po.java:756)
    at com.ibm.db2.jcc.am.po.executeQuery(po.java:725)
    at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:76)
    at com.myapp.shared.db.DBDynamicConnection.executeQuery(DBDynamicConnection.java:177)
The description of the error is here:
http://www-01.ibm.com/support/docview.wss?uid=swg21597038
It's for IBM Tivoli, but it's a DB2 error. The description there says it's a problem with large queries, but when it occurs here the connection stops processing any query at all (simple selects, updates, etc.), and applying the suggested fix of increasing the application heap doesn't seem to have helped.
Now don't quote me, but I think it's just a single connection that's affected, not the whole pool. It's a bit hard to tell, as there are multiple threads making calls to multiple databases, but the symptom is that some things work and others fail. The only way to fix it seems to be a stop and start of Tomcat - even forcing a close of all the connections and c3p0 and then starting it up again doesn't fix it.
I'm also not sure this is a c3p0 issue specifically, but this didn't happen with our old code, and I'm hoping someone with a clue will be able to point me in the right direction!
Further strangeness: this exact code has been running in QA for three months without issue; it's happening only in production (where the transaction volume is much greater).
It seems to me like something, somewhere, is filling up, and restarting Tomcat makes it all better. But what, and where, and how do we avoid it?
The connections are created as follows:
cpds = new ComboPooledDataSource();
cpds.setJdbcUrl(url);
cpds.setUser(username);
cpds.setPassword(password);
cpds.setMinPoolSize(2);
cpds.setMaxPoolSize(4);
cpds.setAcquireIncrement(1);
cpds.setBreakAfterAcquireFailure(true);

// Connection testing
cpds.setPreferredTestQuery("VALUES (CURRENT TIMESTAMP)");
cpds.setIdleConnectionTestPeriod(60);
cpds.setTestConnectionOnCheckin(true);

// Force commit to make sure we never leave transactions hanging
cpds.setAutoCommitOnClose(true);
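For what it's worth, callers then go through our DBDynamicConnection wrapper; stripped of that wrapper, the checkout/return pattern is roughly the sketch below (illustrative only - the method, table and column names are made up, not our real code):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Roughly how callers use the pool configured above (cpds).
// getConnection() checks a connection out of the c3p0 pool; close() (here via
// try-with-resources) returns it to the pool rather than physically closing it.
public List<String> runSampleQuery() throws SQLException {
    try (Connection con = cpds.getConnection();
         PreparedStatement ps = con.prepareStatement("SELECT NAME FROM SOME_SCHEMA.SOME_TABLE");
         ResultSet rs = ps.executeQuery()) {
        List<String> results = new ArrayList<>();
        while (rs.next()) {
            results.add(rs.getString(1));
        }
        return results;
    }
}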
We're running c3p0 0.9.1.2.
Anyone have any ideas? :)
hi, the bug you report suggests a problem in the heap at the database side, but the symptoms you report sound as if the Java VM is in bad shape. perhaps the DB2 JDBC driver is reporting the same error for problems at the Java heap?
do you hot redeploy a lot? Tomcat's odd classloading scheme and c3p0's helper threads interact poorly, sometimes leading to memory leaks on hot redeploy. perhaps you see this after hot redeploys have eaten your JVM's heap. the latest development snapshot of c3p0-0.9.5 includes some new config params to work around Tomcat redeploy memory leaks, but that's probably too bleeding edge for production use. (you probably should upgrade to c3p0-0.9.2.1, though.) you might see whether increasing the heap size on your JVM and/or reducing or eliminating hot redeploys of your webapp reduces the frequency of this problem.
Settings have been added that seem able to work around Tomcat classloading issues. Please see http://www.mchange.com/projects/c3p0/#configuring_to_avoid_memory_leaks_on_redeploy and http://www.mchange.com/projects/c3p0/#tomcat-specific
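For reference, this is roughly how they would be applied to a pool like yours (a sketch based on the parameter names in the docs above - please verify the setters against the 0.9.5 build you actually pick up; the same values can also be set library-wide, e.g. in c3p0.properties):

// Sketch only: the Tomcat-related workarounds documented at the links above.
// Parameter names follow the docs (contextClassLoaderSource, privilegeSpawnedThreads);
// confirm the corresponding setters exist in the 0.9.5 release you deploy.
cpds.setContextClassLoaderSource("library"); // helper threads use c3p0's own classloader, not the webapp's
cpds.setPrivilegeSpawnedThreads(true);       // helper threads don't carry the webapp's access-control context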
Thanks!
Will update and apply the settings to see if it helps.
Kristan
Just to follow up: the changes have been in production for a couple of weeks now and we've not had any issues.
Thanks for the fix, it's much appreciated!
great! thank you for letting me know.