#6 c3p0 aggressively tries to create connections.

Status: closed
Owner: nobody
Labels: None
Priority: 5
Updated: 2006-02-13
Created: 2005-02-07
Creator: Adam
Private: No

Sometimes c3p0 appears to struggle to acquire new
connections and then aggressively retries. This causes
the number of connections on the database to rise above 100,
even though the maximum set in the Hibernate properties is 35.

Here are my Hibernate properties:

<property name="hibernate.dialect" value="net.sf.hibernate.dialect.OracleDialect"/>
<property name="hibernate.connection.provider_class" value="net.sf.hibernate.connection.C3P0ConnectionProvider"/>
<property name="hibernate.c3p0.max_size" value="35"/>
<property name="hibernate.c3p0.min_size" value="5"/>
<property name="hibernate.c3p0.timeout" value="5"/>
<property name="hibernate.c3p0.max_statements" value="0"/>

The errors occur about every 5 seconds (coinciding
with the timeout property, I presume).

Running "select machine, count(machine) from v$session
group by machine", I get 111 and rising, with one more
connection for every error that occurs.

Below is a stack trace of the error.

java.sql.SQLException: Io exception: Connection refused(DESCRIPTION=(TMP=)(VSNNUM=153092352)(ERR=12519)(ERROR_STACK=(ERROR=(CODE=12519)(EMFI=4))))
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179)
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:333)
    at oracle.jdbc.driver.OracleConnection.<init>(OracleConnection.java:404)
    at oracle.jdbc.driver.OracleDriver.getConnectionInstance(OracleDriver.java:468)
    at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:314)
    at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:68)
    at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:87)
    at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1.acquireResource(C3P0PooledConnectionPool.java:83)
    at com.mchange.v2.resourcepool.BasicResourcePool.assimilateResource(BasicResourcePool.java:886)
    at com.mchange.v2.resourcepool.BasicResourcePool.acquireUntil(BasicResourcePool.java:603)
    at com.mchange.v2.resourcepool.BasicResourcePool.access$400(BasicResourcePool.java:31)
    at com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1071)
    at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:354)

Discussion

  • Adam
    2005-02-07

    I should note that c3p0-0.8.5-pre9.jar is being used here.
    Thanks

     
  • Steve Waldman
    2005-02-07

    Adam,

    Can you try a few things for me?

    1) Post c3p0's config dump, which should appear wherever standard
    error goes when your c3p0 pool is initialized.

    2) Change your "hibernate.c3p0.timeout" to something much longer, and
    see how that affects the problem. Does the problem go away with a
    longer expiration time? (In general, expiration times can be quite long,
    and are intended to be much longer than 5 seconds, up to several hours
    really. But some people like to use short expirations to keep the number
    of open Connections in the pool at the minimum possible given the
    current usage level. I think 5 seconds is too short even for this. It's your
    thang, and I'd like it to work however aggressive you want to be, but
    depending on your setup, it's not inconceivable that the first Connection
    has already expired by the time you set up the minimum 5 in the pool, in
    which case Connection acquisition would be continuous. A 60 second
    expiration time amounts to a pretty aggressive reclamation of
    resources.)

    3) Create a c3p0.properties file, and set the parameter
    c3p0.numHelperThreads to a value much higher than its default of 3 (try
    15). Does the problem go away? Even with your 5 second timeout, with
    enough helper threads, the problem may disappear. (A sketch of both
    config changes follows this list.)

    4) If you are seeing any other c3p0-related messages or stack-traces,
    please send them along.
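
    As a rough sketch of suggestions (2) and (3), with illustrative values
    only (the 300-second timeout and 15 helper threads are examples, not
    requirements), the two changes might look like this.

    In the Hibernate configuration:

        <property name="hibernate.c3p0.timeout" value="300"/>

    In a c3p0.properties file on the classpath:

        c3p0.numHelperThreads=15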

    My hypothesis is this: c3p0 destroys Connections asynchronously, passing
    the work of destroying the Connection to a pool of helper threads. Once
    the destruction is ordered, the pool considers the Connections gone. You
    are churning through a lot of Connections -- creating then destroying
    them, and all of this happens asynchronously and in a manner that is not
    guaranteed to be in order. Further, Connection acquisition is both slow
    and "batched", and multiple Connection acquisition tasks can be posted,
    so it's possible for all the helper threads to get busy acquiring
    Connections, and for destroy tasks only occasionally to get through. In
    other words, even though acquisition is slower than destruction, it's
    possible for Connection acquisition to occur faster than destruction, and
    for Connections to be acquired to replace purged Connections that have
    not as yet been destroyed. All of this is only possible in the rare situation
    where the Connection expiration you set is of the same order of
    magnitude as the duration of Connection acquisition. Yours is.
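
    To make that scenario concrete, here is a toy sketch in plain Java. It
    is not c3p0's actual code; the class name, batch sizes, and timings are
    invented. A small fixed pool stands in for the helper threads, slow
    "batched acquire" tasks occupy all of them, and the queued destroy
    tasks only run once the batches finish, so for a while the database
    sees both the expired Connections and their replacements.

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicInteger;

        // Toy model only, NOT c3p0 internals.
        public class HelperPoolSaturationSketch {
            public static void main(String[] args) throws InterruptedException {
                // 3 threads, like c3p0's default numHelperThreads.
                ExecutorService helpers = Executors.newFixedThreadPool(3);
                // 5 expired Connections: logically gone from the pool,
                // but not yet physically closed on the database side.
                AtomicInteger physicalConnections = new AtomicInteger(5);

                // One "batched" acquisition task opens several replacements, slowly.
                Runnable acquireBatch = () -> {
                    for (int i = 0; i < 5; i++) {
                        pause(300);                        // pretend each open is slow
                        physicalConnections.incrementAndGet();
                    }
                };
                // A destroy task closes one expired Connection quickly.
                Runnable destroyOne = () -> {
                    pause(20);
                    physicalConnections.decrementAndGet();
                };

                // Refill tasks are queued ahead of the destroys, so all three
                // helper threads stay busy acquiring while the destroys wait.
                for (int i = 0; i < 3; i++) helpers.submit(acquireBatch);
                for (int i = 0; i < 5; i++) helpers.submit(destroyOne);

                // Watch the count overshoot (up to 20 here) before the queued
                // destroys finally run and bring it back down.
                for (int t = 0; t < 8; t++) {
                    Thread.sleep(250);
                    System.out.println("physically open connections: " + physicalConnections.get());
                }
                helpers.shutdown();
                helpers.awaitTermination(1, TimeUnit.MINUTES);
            }

            private static void pause(long millis) {
                try { Thread.sleep(millis); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
        }

    Across many expiration cycles the same effect compounds, which would be
    consistent with the climbing count you see in v$session.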

    If I'm right about this, longer expiration times and more helper threads
    should both help. I can fix this, and I will if we can verify the issue, just
    to make c3p0 as bulletproof as I can. But still, I recommend a much
    longer timeout.

    smiles,
    Steve

     
  • Adam
    2005-02-07

    Thank you for your detailed explanation and quick reply.

    I have reconfigured my c3p0 settings to use a 60s timeout
    period and will let you know the results tomorrow.
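
    In other words, the timeout property now looks something like:

        <property name="hibernate.c3p0.timeout" value="60"/>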

    Thanks
    Adam

    PS.
    Here is the c3p0 config dump:

    [deployer] - Initializing connection provider: net.sf.hibernate.connection.C3P0ConnectionProvider
    [deployer] - C3P0 using driver: oracle.jdbc.OracleDriver at ------------------------------
    [deployer] - No TransactionManagerLookup configured (in JTA environment, use of process level read-write cache is not recommended)
    Initializing c3p0 pool...
    com.mchange.v2.c3p0.PoolBackedDataSource@1db9f45 [
        connectionPoolDataSource -> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource@1508f31 [
            acquireIncrement -> 1,
            acquireRetryAttempts -> 30,
            acquireRetryDelay -> 1000,
            autoCommitOnClose -> false,
            automaticTestTable -> null,
            breakAfterAcquireFailure -> false,
            checkoutTimeout -> 0,
            connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester,
            factoryClassLocation -> null,
            forceIgnoreUnresolvedTransactions -> false,
            idleConnectionTestPeriod -> 0,
            initialPoolSize -> 5,
            maxIdleTime -> 5,
            maxPoolSize -> 35,
            maxStatements -> 0,
            maxStatementsPerConnection -> 0,
            minPoolSize -> 5,
            nestedDataSource -> com.mchange.v2.c3p0.DriverManagerDataSource@413fc6 [
                description -> null,
                driverClass -> null,
                factoryClassLocation -> null,
                jdbcUrl -> ------------------------------------,
                properties -> {user=******, password=******}
            ],
            preferredTestQuery -> null,
            propertyCycle -> 300,
            testConnectionOnCheckin -> false,
            testConnectionOnCheckout -> false,
            usesTraditionalReflectiveProxies -> false
        ],
        factoryClassLocation -> null,
        numHelperThreads -> 3,
        poolOwnerIdentityToken -> 1db9f45
    ]

     
  • Anonymous (not logged in)

    After setting the timeout property to 90 seconds...

    <property name="hibernate.c3p0.timeout" value="90"/>

    The errors still occur intermittently. The database in
    question is quite busy; I'm not sure if this matters.

    I will continue to increase the timeout in intervals of 30s
    until the issue goes away.

    Thanks.

     
  • Steve Waldman
    2005-03-03

    Don't forget to also try increasing the number of helper threads (you'll have
    to do this in a c3p0.properties file, by setting c3p0.numHelperThreads=6 or
    something). But it sounds like in general you are seeing the expected
    behavior -- increasing hibernate.c3p0.timeout diminishes the likelihood of
    your seeing the problem, and with a more usual timeout (>=300) you don't
    see the problem at all? I'm guessing the problem would diminish with a
    less busy server, and with a lower maximum pool size as well, but I think
    we ought to presume these variables are constant and/or growing. I'll try to
    address this by "fragmenting" c3p0's Connection acquisition tasks, so that
    Connection acquisition batches are less likely to saturate c3p0's thread
    pool under a fast Connection timeout. In the meantime, I hope that the
    combination of a longer time to expiration and more threads in the thread
    pool is okay as a workaround.

    smiles,
    Steve

     
  • Steve Waldman
    2006-02-13

    c3p0-0.9.1 "fragments" Connection acquisitions, which should make this kind of
    churning less likely. first public prerelease in a few days, write me if you want to
    test now. hopefully backing of the aggressive timeout/maxIdleTime has been
    enough in the meantime. hopefully the new version is more robust to this kind
    of thing. sorry for the long delay!

     
  • Steve Waldman
    2006-02-13

    • status: open --> closed