

#1 ConnectionPool exhausts

Status: closed
Owner: nobody
Labels: None
Priority: 5
Updated: 2005-02-07
Created: 2004-09-15
Creator: ManfredHutt
Private: No

Hello,
I'm using Hibernate 2.1 together with c3p0 0.8.4.5 and a MySQL database.
In my application I have a server running some 100 threads from a pool,
each serving a client, with several hundred clients connected at a time.
c3p0 is configured with 50 initial connections, a limit of 100, and an
increment of 1. MySQL is limited to 100 connections.
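A configuration along these lines could be written, for example, in a
c3p0.properties file (property names per the c3p0 docs; the values are the
ones described above):

```properties
# Pool as described: 50 Connections at startup, hard cap of 100,
# one new Connection acquired per expansion round.
c3p0.initialPoolSize=50
c3p0.maxPoolSize=100
c3p0.acquireIncrement=1
```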
Sometimes (not on every run) I observed that MySQL
rejected new connections with the following exception:

java.sql.SQLException: Data source rejected
establishment of connection, message from server: "Too
many connections"

When checking in more detail I found that in postAcquireMore of
BasicResourcePool the number of managed resources grew much more slowly
than the number of pending_acquires and num_desired.
This leads me to conclude that worker threads become active much faster
than new connections can be established and made available.
Is this observation correct?

Another point I do not understand: why do I see no connections being
released back to the pool? That should help satisfy the outstanding
requests. Have you observed such behaviour before, and how can I
resolve it?

Thanks

Manfred Hutt

Discussion

  • Steve Waldman
    2004-12-01


    Manfred,

    1) Sorry this is so late.... I didn't have this "tracker" set up to e-mail me
    on adds, and yours is the first (of now several) support requests placed
    here on SourceForge. (Usually I've handled support requests by e-mail, or
    on the list c3p0-users@lists.sourceforge.net. Anyway, I should get e-mails
    now from support submissions here.)

    2) pending_acquires and num_desired should grow much more quickly
    than the number of actual connections acquired. These get incremented
    immediately upon a Connection request, while Connection acquisition is
    slow, and depends on the database, network I/O etc.

    3) The number of worker threads is fixed, not determined by Connection
    acquisition requests. You can set the number of helper threads with the
    parameter numHelperThreads (see the c3p0 docs for how to set
    parameters).
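For example, in a c3p0.properties file (the value here is illustrative;
per the c3p0 docs the default is 3):

```properties
# Helper threads that perform slow JDBC operations off the client's thread
c3p0.numHelperThreads=6
```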

    4) If you're not seeing Connections returned to the pool, are you sure
    they are being reliably closed in a finally block? They most certainly
    should be returned to the pool. The only condition under which they
    would not come back is if an error occurred on close, or while the
    Connection was in use, that caused c3p0 to purge the Connection from the
    pool. If somehow you are closing c3p0-generated Connections, no errors
    are occurring, and they are not re-entering the pool, that would be a
    very serious c3p0 bug. If you think this is the case, let me know about it
    and how I might try to reproduce what you're doing.
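The close-in-a-finally pattern described in point 4 can be sketched as
follows -- a minimal, self-contained illustration in which a counter stands
in for the pool's checked-out count, and the hypothetical useConnection
models checking a Connection out, using it, and closing it in a finally
block:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class FinallyCloseDemo {
    // Stands in for the pool's count of checked-out Connections.
    static final AtomicInteger checkedOut = new AtomicInteger();

    // Models: con = ds.getConnection(); try { ...query... } finally { con.close(); }
    static void useConnection(boolean queryFails) {
        checkedOut.incrementAndGet();       // getConnection(): checked out of the pool
        try {
            if (queryFails) {
                throw new RuntimeException("query failed");
            }
            // ... use the Connection ...
        } finally {
            checkedOut.decrementAndGet();   // close() in finally: always returned
        }
    }

    public static void main(String[] args) {
        try {
            useConnection(true);            // a failed query still returns its Connection
        } catch (RuntimeException expected) {
        }
        useConnection(false);
        System.out.println("checked out: " + checkedOut.get());  // prints "checked out: 0"
    }
}
```

Whatever the query does, the finally block runs, so the count always
returns to zero -- which is why a Connection that never re-enters the pool
points at a missing close() rather than at the pool itself.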

    5) If you expect to have a lot of Connections open, try setting an
    acquireIncrement greater than 1. It's usually counterproductive to force
    the pool to go through a fresh Connection acquisition each time a request
    comes in beyond the number of Connections currently pooled.
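Applied, for example, in a c3p0.properties file (the value is illustrative):

```properties
# Acquire 5 Connections per pool expansion instead of 1 (illustrative value)
c3p0.acquireIncrement=5
```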

    Sorry again for the long delay!

    Steve

     
  • Steve Waldman
    2005-02-07

    • status: open --> closed
     
  • Anonymous (not logged in)

    We also have an issue with an exhausted connection pool. It
    will keep 19 in the pool. I set up a test to burn through
    them, ensuring that they are closed in a finally block. It
    blows out after #19 every time. All I did was run it through
    a loop with a simple select statement.

     
  • Steve Waldman
    2005-05-24


    Anonymous commenter -- if you want any help with this,
    you'll have to provide a lot more information. Most users do
    not see the behavior you describe, and there is nothing here
    that would help anyone figure out the problem.
    Good luck!