Hello.
I'm running win32 apache 2.0.52, python 2.3, MySQL-python-1.0.0.win32-py2.3.zip and MySQL 4.1.7, all pre-compiled.
When I use openload on my test.py script (see below), Apache simply crashes, though everything is fine with sequential requests.
As I understand it, this has something to do with thread safety somewhere. I would be grateful if somebody could explain how to avoid this problem.
Thanks,
Yuri.
<pre>
import em, StringIO
import MySQLdb
from mod_python import apache

def index(req):
    res = None
    try:
        cursor = _getconn().cursor()
        cursor.execute("SELECT * FROM sitenode")
        res = cursor.fetchone()
    finally:
        cursor.close()

def _getconn():
    global _conn
    try:
        return _conn
    except NameError:
        _conn = MySQLdb.connect(host="localhost", user="t1",
                                passwd="t1", db="t1")
        return _conn
</pre>
The 1.0.0 Windows package is compiled against MySQL-4.0, and I think there is no realistic possibility that it will work with a 4.1 server due to some API and protocol changes. Try compiling your own 1.1.7 version against MySQL-4.1 and see if that works.
If it involves compiling on win32 then I'm stuck. :( I don't think I have the tools to compile these files on win32. I thought switching to MySQL 4.0 would help, but it did not. Well, I'll try to find suitably compiled versions; thanks anyway.
Yuri.
Andy,
I've compiled _mysql.c against a 4.1.7 installation, but the problem did not go away. Well, it changed a bit. When running openload now, it seems to hang, i.e. it gives no result, while Apache continues to work OK. Then I ran two parallel wgets in a loop, and after a few seconds Apache crashed as before.
Could you possibly suggest where the problem may be, based on your programming experience?
It crashes precisely on cursor.execute(), and since I use a global, the same connection is shared by multiple requests.
Maybe mysqlclient.lib or MySQL-python has some sort of debug logging?
Thanks for any help...
Yuri
After looking at your code, I think what is happening is you are trying to share a connection between threads, which is bad news. You have this:
<pre>
def _getconn():
    global _conn
    try:
        return _conn
    except NameError:
        _conn = MySQLdb.connect(host="localhost", user="t1", passwd="t1", db="t1")
        return _conn
</pre>
First, get my Pool module from http://dustman.net/andy/python/Pool
Then use it like this:
<pre>
from Pool import Pool, Constructor

conn_pool = Pool(Constructor(MySQLdb.connect, host="localhost", user="t1", passwd="t1", db="t1"))
</pre>
Then, to get a connection:
<pre>
global conn_pool
db = conn_pool.get()
</pre>
When you are done with it, be sure to do this:
<pre>
conn_pool.put(db)
</pre>
Otherwise it will open a new connection each time and they won't be recycled.
You should also make sure you do a db.commit() or db.rollback() before you return the connection to the pool to ensure that it is in an initial state. In your example, you are only doing a SELECT, so you should only need to do a db.rollback() to reset the transaction. If you don't use transaction-safe tables, this probably doesn't matter.
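As a rough illustration of this checkout/rollback/return discipline, here is a minimal sketch using Python's standard `queue.Queue` and a stand-in connection class in place of MySQLdb (the real Pool module's internals may differ; `FakeConnection` and `handle_request` are hypothetical names for illustration):

```python
import queue

class FakeConnection:
    """Stand-in for a MySQLdb connection, for illustration only."""
    def __init__(self):
        self.rolled_back = False

    def rollback(self):
        # Close out any pending transaction before the connection is reused.
        self.rolled_back = True

# A hypothetical pool with the get()/put() interface described above.
conn_pool = queue.Queue()
conn_pool.put(FakeConnection())

def handle_request():
    db = conn_pool.get()       # check a connection out of the pool
    try:
        # ... run queries via db.cursor() here ...
        return "row"
    finally:
        db.rollback()          # reset the transaction state (SELECT-only case)
        conn_pool.put(db)      # always return the connection for reuse

result = handle_request()
```

The try/finally is the important part: the connection is returned to the pool even if the query raises, which avoids the leak where every failed request silently spawns a fresh connection.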
Andy,
Thanks for your Pool module, I've enjoyed it a lot. However, I ran into a problem while using it. If I run 50 concurrent connections, Apache (or some part of it) falls into a strange state. There are 2 threads in the "W" state, according to /server-status, which do not die. Further requests to .py files make mod_python work, but no content is ever sent to the client browser. Also, Apache cannot exit gracefully and has to force termination of its child processes.
If I use simple create/close of db connections, everything runs smoothly.
Do you have any ideas about it? I would be grateful for any tips.
Cheers,
Yuri
Pool, by default, uses an internal Queue of size 5. This means that if there are already 5 connections in the Pool and you try to put another one, it is simply thrown away. It may be that you need to increase this limit to match the load on your server. It could also be that there is a connection limit on your MySQL server, or possibly in Apache.
Note that I do not have experience with either mod_python or Windows, so I can't offer much help here.
Hi,
just one question about your Pool class. When I have 5 connections and all are in use and another request is issued, a new connection will be made because the pool is empty, so there will be 6 connections open... am I right? So when all are returned, the last one would block until someone calls .get() again and a slot is free...
In order to restrict the number of connections, the Queue would have to count the number of in-use connections...
Greetings ...
The purpose of Pool is not to provide an upper limit on the total number of connections. Rather, it's to ensure that you have one available when you need one, and to impose an upper limit on idle connections.
Consider this scenario: the Pool is initially empty, and then 100 requests come in at about the same time. It will end up spawning 100 connections. As they are released, they are returned to the Pool (this has to be done by the application; if it fails to do so, the Pool is nothing but a Factory, though there are no other ill effects, so long as the application deletes the connection). The Pool will quickly fill up (default 5), and any further connections returned to the Pool will simply be discarded.
Another scenario: You typically have 10 concurrent requests, which results in 10 open connections. Assuming the Pool is empty, you can then have as few as 5 concurrent requests without throwing away excess connections.
How to determine the best Pool size: Well, ya got me there. 5 seemed pretty reasonable. If your goal is to avoid opening and closing connections unnecessarily (which is what Pool was intended for), then the Pool size should be about equal to the difference between the maximum and minimum concurrency you expect to see.
The Queue part of the Pool ensures that you always get the least recently used object in the Pool. You should keep in mind that connections can time out and may need to be re-established. For MySQLdb connections, you can use the ping() method when you pull them out of the Pool which will check the connection status and reconnect if necessary. Also, you should make sure you commit() or rollback() before returning a connection to the Pool to close out any pending transactions.
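The behavior described in this thread — FIFO reuse, a bounded idle queue that silently discards surplus returns, and a liveness check on checkout — can be sketched roughly like this (this is not the real Pool module, just an illustration of the semantics, with a stand-in connection class and a hypothetical `factory` parameter):

```python
import queue

class Pool:
    """Sketch of the pooling behavior described above (not the real Pool module)."""
    def __init__(self, factory, size=5):
        self.factory = factory
        self.idle = queue.Queue(maxsize=size)  # bounded queue of idle connections

    def get(self):
        try:
            conn = self.idle.get_nowait()      # FIFO: least recently returned first
        except queue.Empty:
            conn = self.factory()              # pool empty: spawn a new connection
        conn.ping()                            # reconnect if the server timed us out
        return conn

    def put(self, conn):
        try:
            self.idle.put_nowait(conn)         # keep it around for reuse
        except queue.Full:
            conn.close()                       # pool full: discard the surplus

class FakeConnection:
    """Stand-in for a MySQLdb connection."""
    def __init__(self):
        self.closed = False
    def ping(self):
        pass                                   # a real connection would reconnect here
    def close(self):
        self.closed = True

pool = Pool(FakeConnection, size=2)
conns = [pool.get() for _ in range(3)]  # 3 concurrent users -> 3 connections spawned
for c in conns:
    pool.put(c)                         # only 2 fit back; the third is closed
```

Note that `put()` never blocks: an over-full pool sheds connections instead of making the caller wait, which is exactly the "throws it away" behavior described earlier in the thread.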
Thanks Andy, it works like clockwork!