import _mysql
import sys

db = _mysql.connect(host="127.0.0.1", user="python", passwd="snake", db="python")
qString = "SELECT coordinates FROM temp_objects WHERE name=1;"
db.query(qString)
r = db.use_result()  # store_result() doesn't help much, in fact not at all.
db.close()
print r.fetch_row()
sys.exit()
Now some of you might have seen the problem already, but others might not have.
It's sort of obvious once you take a closer look and not just a glance at the code.
I basically wrote like a devil on speed to get the code working while answering phone calls all day at work,
and I didn't realize the mistake I had just made writing that code down.
The first few lines are basically standard (except that r = db.use_result() should be db.store_result() instead).
But the db.close() line is kinda sneaky.
See, here's the actual problem:
db is a handle for the database connection; so far so good.
r, which I thought was the result returned as arrays, is not. It's a handle as well.
Therefore, if you call db.close() before you have fetched the rows with r.fetch_row(), you'll end up with an error like this:
_mysql_exceptions.InterfaceError: (0, '')
Now if you fetch, say, 100k SQL rows, it will take some time.
You have already queried the database for them; now you just have to loop through them to extract the values.
And the database connection will stay open during these 100k rows of fetching and looping, since you can't close the socket.
This takes its toll on the server, since you need to keep the connection alive until you're basically done with the whole application loop.
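To see the failure mode without a MySQL server, here is a minimal sketch using Python's stdlib sqlite3 module as a stand-in (modern Python 3 syntax; the table and column names mirror the snippet above but are otherwise made up). The fetch-before-close rule is the same under MySQLdb:

```python
import sqlite3

# sqlite3 as a server-less stand-in for the same DB-API ordering rule.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE temp_objects (name INTEGER, coordinates TEXT)")
cur.execute("INSERT INTO temp_objects VALUES (1, '10,20')")
cur.execute("SELECT coordinates FROM temp_objects WHERE name = 1")

rows = cur.fetchall()   # fetch while the connection is still open
conn.close()            # safe to close only now

print(rows)             # -> [('10,20',)]

# Touching the cursor after close fails, analogous to the
# _mysql_exceptions.InterfaceError described above:
closed_error = False
try:
    cur.execute("SELECT 1")
except sqlite3.ProgrammingError:
    closed_error = True
```

The same InterfaceError the original snippet hits shows up here as a sqlite3.ProgrammingError; the lesson either way is that the result handle is only valid while the connection is open.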
The following program gave the output shown at the end of this post when run in IDLE,
but it runs fine in the interpreter line by line.
I am not able to execute it as a script from DOS.
Any explanation, please?
Chandhibala
import MySQLdb
import sys
try:
    con = MySQLdb.connect(host="localhost", port=3306, user="root", passwd="", db="world")
except:
    print "connection failed"
finally:
    if (con != None):
        con.close()
print "connected successfully"
cursor = con.cursor()
sql = "select * from city where population > 5000000"
cursor.execute(sql)
resultset = cursor.fetchall()
for x in resultset:
    print x
con.close()
output:
connected successfully
Traceback (most recent call last):
File "C:\Documents and Settings\brajendran\pythonlessons\dbConnect.py", line 18, in <module>
cursor.execute(sql)
File "C:\Python25\Lib\site-packages\MySQLdb\cursors.py", line 147, in execute
charset = db.character_set_name()
InterfaceError: (0, '')
For starters, in your code, if the connection is successfully opened... you immediately close the connection, and then try to do stuff with it. Why would you do this?
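One shape that avoids this is to close the connection only after all the work on it is done, e.g. with a try/finally around the usage rather than around the connect. A minimal sketch, using sqlite3 so it runs without a MySQL server (with MySQLdb only the connect call and the sample data would differ):

```python
import sqlite3

def query_big_cities():
    # with MySQLdb this would be something like:
    #   con = MySQLdb.connect(host="localhost", user="root", passwd="", db="world")
    con = sqlite3.connect(":memory:")
    try:
        cursor = con.cursor()
        cursor.execute("CREATE TABLE city (name TEXT, population INTEGER)")
        cursor.execute("INSERT INTO city VALUES ('Tokyo', 8336599)")
        cursor.execute("INSERT INTO city VALUES ('Reykjavik', 113906)")
        cursor.execute("SELECT name FROM city WHERE population > 5000000")
        return cursor.fetchall()    # fetch before the connection goes away
    finally:
        con.close()                 # close AFTER the work, not before it

print(query_big_cities())           # -> [('Tokyo',)]
```

The finally block still guarantees the connection is released on any error, but the close now happens after execute and fetchall instead of immediately after connect.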
I'm no expert on this, so the following could be completely wrong! :)
I thought only the SQL would be executed at the db.query() call, and no data would be transferred from the server to the client (except possibly the number of rows in the result). Only when you call fetch_row() would the actual record data get transferred to the client. This means if you decide you don't actually want the rest of the data (e.g. if it turns out to be several gigabytes!) you can close the connection without transferring the data across the network.
If you want to close the connection you could just call fetchall() and store the result in a list. Then close the connection and after that you can start processing the data.
My 2 cents. But maybe I'm missing something?
Brian
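Brian's fetch-everything-then-close suggestion can be sketched like this (again with sqlite3 as a server-less stand-in and made-up sample data; the pattern is the same under MySQLdb's DB-API interface):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])
cur.execute("SELECT x FROM t")

rows = cur.fetchall()   # pull the whole result set into a Python list
con.close()             # release the connection immediately

# the (possibly slow) processing loop runs with the socket already closed
total = sum(x for (x,) in rows)
print(total)            # -> 10
```

The trade-off is memory: fetchall() materializes every row at once, which is exactly why a huge leftover result set is better handled by a more selective query, as noted in the next reply.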
You are pretty much on target, Brian. Except, if you decide you don't actually want the rest of the data, you still need to fetch it. Technically, yes, you could close the connection, though this might raise an exception. But the MySQL API requires fetching all rows. If you really had that much data leftover, this means you need a better query that returns less data.
Also, the original poster is using the low-level C-style interface and not the Python-style DB-API interface.
Yeah, silly mistake. Thanks so much for taking the trouble to reply.
B.Rajendran