Hi,
When I create a Cursor object, should I use it for a short time and then dispose of it, getting a new one for the next operation, or is it okay to keep cursors around for a long time?
(a long time could be the uptime of a Zope server, for example ;-)
cheers,
Chris
Zope (ZMySQLDA) does not use MySQLdb but rather _mysql, which has no notion of cursors. Even in MySQLdb, cursors are implemented in Python, wrapping a _mysql.MYSQL_RESULT instance. The normal Cursor class disposes of this object immediately after the query, storing the result in a tuple. MYSQL_RESULT instances are good for only a single query, since each one is a result set.
So for Zope, your question is irrelevant. In general, however, the answer is: It's safe to keep cursors around for a long time.
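For illustration, here is a minimal sketch of the long-lived-cursor pattern (the connection parameters and the query are made up; untested):

import MySQLdb

conn = MySQLdb.connect(db="test")    # hypothetical connection parameters
cur = conn.cursor()                  # one cursor, kept for the life of the process

for i in range(100):
    cur.execute("SELECT %s + 1", (i,))   # MySQLdb uses %s placeholders
    print(cur.fetchone())                # each execute() replaces the old result set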
Well, I'm using MySQLdb in Zope rather than ZMySQLDA, for some good reasons...
Anyway, it's good to know that. I was asking 'cos I'm trying to figure out why MySQL keeps hanging on my Win2K box after I've indexed about 10,000K docs. The number of docs indexed seems to depend on the table_cache variable, so if you know of anything in MySQLdb that could cause this, I'd be happy to hear about it :-)
The fruits of all this will hopefully appear at:
http://sourceforge.net/projects/pythonindexer/
...at some stage.
cheers,
Chris
Other than the fact that Cursor objects contain every row of the result set until you delete them or issue a new query, there's no reason not to keep them around forever. SSCursors do not have this problem, but have an entirely different problem: All rows of the result set must be read before another query can be issued on the connection. Failure to observe this rule will cause bad things to happen. Both of these are consequences of MySQL's design.
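To make both problems concrete, a rough sketch (the connection details and table name are made up):

import MySQLdb
import MySQLdb.cursors

conn = MySQLdb.connect(db="test")        # hypothetical connection

# Standard Cursor: the whole result set sits in client memory until the
# cursor is deleted or runs another query.
cur = conn.cursor()
cur.execute("SELECT * FROM big_table")   # 'big_table' is a made-up name
cur.execute("SELECT 1")                  # the previous rows are discarded here

# SSCursor: rows stream from the server, but the set must be read to the
# end before any other query goes over this connection.
ss = conn.cursor(MySQLdb.cursors.SSCursor)
ss.execute("SELECT * FROM big_table")
for row in iter(ss.fetchone, None):      # drain every remaining row first
    pass
ss.execute("SELECT 1")                   # only now is another query safe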
What's an SSCursor?
And with normal cursors, will _any_ query dump the previous result set?
cheers,
Chris
SSCursor is a Cursor which uses mysql_use_result() to get the result set. In this case, rows are fetched from the server on demand, rather than all at once, as with mysql_store_result(), used by the standard Cursor.
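If you want to experiment, an SSCursor can be requested per cursor, or made the default for a connection (a sketch; the database name is made up):

import MySQLdb
import MySQLdb.cursors

# Per-cursor: pass the class when asking the connection for a cursor.
conn = MySQLdb.connect(db="test")
ss = conn.cursor(MySQLdb.cursors.SSCursor)

# Connection-wide: make SSCursor the default class for every cursor.
conn2 = MySQLdb.connect(db="test", cursorclass=MySQLdb.cursors.SSCursor)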
And yes, with a normal cursor, each execute() throws away the last result set. However, since the entire result set is immediately stored within the Cursor as a tuple of tuples, you can perform fetchall() immediately after execute() and this will return the actual tuple of tuples, not a slice of it. fetchall() looks like this:
def fetchall(self):
    """Fetches all available rows from the cursor."""
    self._check_executed()
    # If some rows have already been fetched, return a slice holding the
    # remainder; otherwise return the stored tuple of rows itself.
    result = self.rownumber and self._rows[self.rownumber:] or self._rows
    self.rownumber = len(self._rows)
    return result
It does return a slice if any rows have been read with fetchone() or fetchmany(). Obviously if no rows have been previously read, fetchall() is very fast, but of course the heavy lifting has already been done by mysql_store_result() (called from within _mysql.query()).
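A quick demonstration of the slicing behaviour, reusing the conn from the earlier sketch (any multi-row result will do; untested):

cur = conn.cursor()
cur.execute("SELECT 1 UNION SELECT 2 UNION SELECT 3")
first = cur.fetchone()       # advances rownumber to 1
rest = cur.fetchall()        # a slice: just the two remaining rows

cur.execute("SELECT 1 UNION SELECT 2 UNION SELECT 3")
everything = cur.fetchall()  # rownumber is 0, so this is the stored tuple itself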