From: Stuart B. <ze...@sh...> - 2002-01-21 05:32:14
|
On Monday, January 21, 2002, at 04:24 AM, Lars Kellogg-Stedman wrote:
> Hello all,
>
> I've just started using pyPgSQL, and I've run into a frustrating
> problem -- how do I determine the size of a result set *before* calling
> one of the fetchxxx() functions?

You don't. Most database systems behave this way, as there is no way of telling how many rows will be returned without performing the query. As the query may take several hours, or even days, to actually complete, this row count is only available after all rows have been fetched.

> The rowcount attribute doesn't appear to be terribly useful. As far as
> I can tell, it simply tells me the number of rows returned by the
> previous fetchxxx() call -- a value which I can also get just by
> calling len() on the return from fetchxxx(). Before a fetch call,
> rowcount always contains -1.
>
> Is there any other way of getting at this information?

If you *need* the rowcount, you can retrieve it by issuing a 'select count(blah) from ...' query first.

--
Stuart Bishop <ze...@sh...>
http://shangri-la.dropbear.id.au/
|
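Stuart's suggestion works with any DB-API 2.0 adapter. The sketch below uses the standard library's sqlite3 module as a stand-in for pyPgSQL, with a made-up `languages` table; note the two queries only see the same data when run inside one transaction.

```python
import sqlite3

# sqlite3 stands in for pyPgSQL here: both expose the DB-API 2.0
# cursor interface, so the pattern carries over unchanged.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE languages (name TEXT)")
cur.executemany("INSERT INTO languages VALUES (?)",
                [("Perl",), ("Python",), ("Ruby",)])

# First ask how big the result set will be...
cur.execute("SELECT count(*) FROM languages")
(n_rows,) = cur.fetchone()

# ...then run the real query, knowing how many rows to expect.
cur.execute("SELECT name FROM languages")
rows = cur.fetchall()
assert len(rows) == n_rows
```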
|
From: Lars Kellogg-S. <la...@la...> - 2002-01-20 17:24:09
|
Hello all,

I've just started using pyPgSQL, and I've run into a frustrating problem -- how do I determine the size of a result set *before* calling one of the fetchxxx() functions?

The rowcount attribute doesn't appear to be terribly useful. As far as I can tell, it simply tells me the number of rows returned by the previous fetchxxx() call -- a value which I can also get just by calling len() on the return from fetchxxx(). Before a fetch call, rowcount always contains -1.

Is there any other way of getting at this information?

Thanks,

-- Lars
|
|
From: Gerhard <ger...@gm...> - 2002-01-17 23:16:46
|
On 17/01/02 at 17:21, Tom Jenkins wrote:
> hello all,
> we're migrating away from pygresql and i'm evaluating pypgsql along with
> psycopg. i found something troubling with both adapters. when we
> execute a select of this form:
> select count(myfield) from mytable where mykey = myvalue
> and there aren't any rows that match then we get a TypeError exception
> (psycopg returns ((None,)) which is almost as bad <wink>). Similarly i
> was getting the same exception on sums. Now i expected and got in
> pygresql a return of ((0,)).
>
> is anyone else seeing this or i am missing something here
I tried to reproduce what you're describing:
Script started on Fri Jan 18 00:00:14 2002
gerhard@lilith:~$ psql -h gargamel gerhard
Welcome to psql, the PostgreSQL interactive terminal.
Type: \copyright for distribution terms
\h for help with SQL commands
\? for help on internal slash commands
\g or terminate with semicolon to execute query
\q to quit
gerhard=# create table test(id serial, name varchar(20));
NOTICE: CREATE TABLE will create implicit sequence 'test_id_seq' for SERIAL column 'test.id'
NOTICE: CREATE TABLE/UNIQUE will create implicit index 'test_id_key' for table 'test'
CREATE
gerhard=# insert into test(name) values ('bla');
INSERT 18794 1
gerhard=# select * from test;
id | name
----+------
1 | bla
(1 row)
gerhard=# select count(id), sum(id) from test where id=3245;
count | sum
-------+-----
0 |
(1 row)
gerhard=# \q
gerhard@lilith:~$ python2.1
Python 2.1.1+ (#1, Jan 8 2002, 00:37:12)
[GCC 2.95.4 20011006 (Debian prerelease)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> from pyPgSQL import PgSQL
>>> import psycopg
>>> db = PgSQL.connect(host="gargamel")
>>> cursor=db.cursor()
>>> cursor.execute("select count(id) from test where id=456")
>>> print cursor.fetchall()
[[0]]
>>> db = psycopg.connect("host=gargamel")
>>> cursor = db.cursor()
>>> cursor.execute("select count(id) from test where id=456")
>>> print cursor.fetchall()
[(0,)]
>>>
gerhard@lilith:~$ exit
Script done on Fri Jan 18 00:05:06 2002
I was using PostgreSQL 7.1.3 and the latest released versions of pyPgSQL
and psycopg, but couldn't reproduce the problem you were describing.
Could you try to produce a testcase (SQL schema and Python code) and
upload it to the Sourceforge bug tracker or post it here?
Or perhaps I didn't understand your question correctly!?
Gerhard
--
mail: gerhard <at> bigfoot <dot> de registered Linux user #64239
web: http://www.cs.fhm.edu/~ifw00065/ OpenPGP public key id 86AB43C0
public key fingerprint: DEC1 1D02 5743 1159 CD20 A4B6 7B22 6575 86AB 43C0
reduce(lambda x,y:x+y,map(lambda x:chr(ord(x)^42),tuple('zS^BED\nX_FOY\x0b')))
|
|
From: Tom J. <tje...@de...> - 2002-01-17 22:22:56
|
hello all,

we're migrating away from pygresql and i'm evaluating pypgsql along with psycopg. i found something troubling with both adapters. when we execute a select of this form:

select count(myfield) from mytable where mykey = myvalue

and there aren't any rows that match, then we get a TypeError exception (psycopg returns ((None,)), which is almost as bad <wink>). Similarly i was getting the same exception on sums. Now i expected, and got in pygresql, a return of ((0,)).

is anyone else seeing this or am i missing something here

--
Tom Jenkins
Development InfoStructure
http://www.devis.com
|
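For what it's worth, the SQL side of this is well defined: COUNT over zero rows yields 0, while SUM yields NULL, which a DB-API driver maps to None. A sketch with the standard library's sqlite3 module as a stand-in for the adapters under discussion, including the COALESCE workaround:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE mytable (myfield INTEGER, mykey INTEGER)")

# No rows match: COUNT still yields one row containing 0...
cur.execute("SELECT count(myfield) FROM mytable WHERE mykey = 42")
print(cur.fetchall())   # [(0,)]

# ...but SUM yields NULL, which arrives in Python as None.
cur.execute("SELECT sum(myfield) FROM mytable WHERE mykey = 42")
print(cur.fetchall())   # [(None,)]

# COALESCE turns the NULL into a 0 on the SQL side.
cur.execute("SELECT coalesce(sum(myfield), 0) FROM mytable WHERE mykey = 42")
print(cur.fetchall())   # [(0,)]
```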
|
From: Bernhard H. <bh...@in...> - 2002-01-14 14:50:59
|
"Billy G. Allie" <Bil...@mu...> writes:
> Bernhard Herzog wrote:
> > Is it possible to use pyPgSQL for asynchronous connections? It seems to
> > be the only Python/PostgreSQL module that actually wraps the relevant
> > libpq functions, but the Python API doesn't provide access to them. Is
> > anybody actually using them?
>
> The pyPgSQL.PgSQL module does not use asynchronous connections. If you need
> to use ansynchronous connections you will have to use pyPgSQL.libpq module,
> which exposes the appropiate API.
That's what I ended up doing. I also modified the copy a bit to
reactivate the PQconnectStart bindings for asynchronous connections and
removed the version specific stuff because that wouldn't work with
PQconnectStart and we won't need the compatibility code.
> Unfortunately, you will not have access to the DB-API 2.0 constructs.
That's not much of a problem. The project I need it for is pretty much
tied to PostgreSQL anyway because we're using the PostGIS extensions.
Thanks for your reply,
Bernhard
--
Intevation GmbH http://intevation.de/
Sketch http://sketch.sourceforge.net/
MapIt! http://mapit.de/
|
|
From: Billy G. A. <Bil...@mu...> - 2002-01-02 20:36:35
|
Bernhard Herzog wrote:
> Is it possible to use pyPgSQL for asynchronous connections? It seems to
> be the only Python/PostgreSQL module that actually wraps the relevant
> libpq functions, but the Python API doesn't provide access to them. Is
> anybody actually using them?

The pyPgSQL.PgSQL module does not use asynchronous connections. If you need to use asynchronous connections, you will have to use the pyPgSQL.libpq module, which exposes the appropriate API. Unfortunately, you will not have access to the DB-API 2.0 constructs.

I do not have plans at this time to use asynchronous connections in pyPgSQL.PgSQL. If there is enough interest in adding this to pyPgSQL.PgSQL, I will consider it (or will consider patches to add it if someone develops them :-).
___________________________________________________________________________
____ | Billy G. Allie | Domain....: Bil...@mu...
| /| | 7436 Hartwell | MSN.......: B_G...@em...
|-/-|----- | Dearborn, MI 48126|
|/ |LLIE | (313) 582-1540 |
|
|
From: Bernhard H. <bh...@in...> - 2002-01-02 16:03:50
|
Is it possible to use pyPgSQL for asynchronous connections? It seems to be the only Python/PostgreSQL module that actually wraps the relevant libpq functions, but the Python API doesn't provide access to them. Is anybody actually using them?

Bernhard
--
Intevation GmbH http://intevation.de/
Sketch http://sketch.sourceforge.net/
MapIt! http://mapit.de/
|
|
From: Edmund L. <el...@in...> - 2001-12-24 02:44:32
|
Gerhard,

>> In testing and unstable only. Currently packaged version is 2.0 (the latest one). pyPgSQL perhaps didn't even exist when the last stable version (2.2) was released (March 2000, IIRC). <<

I know why I didn't find pyPgSQL when I looked for it... I searched for pypgsql in dselect. It never occurred to me that python-pgsql was it, although I should have known better...

...Edmund.
|
|
From: Edmund L. <el...@in...> - 2001-12-24 00:41:21
|
>> In testing and unstable only. <<

Great! I didn't see it in Testing--must be going blind.

>> ... who is happily using Debian sid, FreeBSD and Windows XP <<

Well, two out of three (Debian, FreeBSD) still makes you a man of impeccable taste! :-)

...Edmund.
|
|
From: Gerhard <ger...@gm...> - 2001-12-24 00:10:22
|
On 23/12/01 at 17:18, Edmund Lian wrote:
> Is there a .deb version of pypgsql anywhere?

http://packages.debian.org/cgi-bin/search_packages.pl?keywords=python-pgsql&release=all
(The URL must be in one line)

In testing and unstable only. Currently packaged version is 2.0 (the latest one). pyPgSQL perhaps didn't even exist when the last stable version (2.2) was released (March 2000, IIRC).

Gerhard ... who is happily using Debian sid, FreeBSD and Windows XP :-)
--
mail: gerhard <at> bigfoot <dot> de registered Linux user #64239
web: http://www.cs.fhm.edu/~ifw00065/ OpenPGP public key id 86AB43C0
public key fingerprint: DEC1 1D02 5743 1159 CD20 A4B6 7B22 6575 86AB 43C0
reduce(lambda x,y:x+y,map(lambda x:chr(ord(x)^42),tuple('zS^BED\nX_FOY\x0b')))
|
|
From: Edmund L. <el...@in...> - 2001-12-23 22:16:44
|
Is there a .deb version of pypgsql anywhere? ...Edmund. |
|
From: Gerhard <ger...@gm...> - 2001-12-23 16:20:42
|
Now is the time everybody waits for his favourite extension modules to release a Python 2.2 version. Here's one for pyPgSQL:

http://www.cs.fhm.edu/~ifw00065/downloads/pyPgSQL-2.0.win32-py2.2.exe
http://www.cs.fhm.edu/~ifw00065/downloads/pyPgSQL-2.0.win32-py2.2.exe.md5

MD5 sum: ce23e87d32c229f0d5574a0687fb539f *pyPgSQL-2.0.win32-py2.2.exe

Now I'm waiting for 2.2 versions of Numeric, wxPython, ...

Billy, it would be nice if you could put this on the Sourceforge download page. And merry X-mas :-)

Gerhard
--
mail: gerhard <at> bigfoot <dot> de registered Linux user #64239
web: http://www.cs.fhm.edu/~ifw00065/ OpenPGP public key id 86AB43C0
public key fingerprint: DEC1 1D02 5743 1159 CD20 A4B6 7B22 6575 86AB 43C0
reduce(lambda x,y:x+y,map(lambda x:chr(ord(x)^42),tuple('zS^BED\nX_FOY\x0b')))
|
|
From: Billy G. A. <Bil...@mu...> - 2001-11-28 23:21:00
|
Adam Buraczewski wrote:
> On Mon, Nov 26, 2001 at 02:24:13PM +0100, Gerhard Häring wrote:
>>> Then the PgSQL module should create a new Connection object, make a
>>> connection to the database, and send:
>>>
>>> SET CLIENT_ENCODING TO 'LATIN2';
>>
>> I think it's a better idea that the connect method gets an optional
>> parameter client_encoding (used if and only if conversions to/from
>> Unicode are done), but the user has to issue a
>> "SET CLIENT_ENCODING TO 'whatever'" manually, too.
>
> OK, it looks good for me. As a programmer I don't like when a library
> wrapper works behind the scenes and sends commands to a database
> backend on its own. However, it would be nice if PgSQL could send
> automatically some commands to PostgreSQL backend on every session or
> transaction start. Lately I have convinced Billy (at least I hope so
> ;) ) to introduce a transaction isolation level support, which is
> still absent from other Python interfaces for PostgreSQL (pyPgSQL is
> the first, as I know).
You have. I have it implemented on my machine, but I've been swamped
with work related issues, leaving me little time for the fun stuff
(pyPgSQL, etc.). I will put the patch up on Friday for the transaction
level related changes. I am also going to propose that transaction level
support be added to the next DB-API specification.
> I thought a bit about all that and a general
> solution came to my mind: two lists (or, even better: dictionaries) of
> strings. One of them should be sent to PostgreSQL backend on session
> start, the other just after every "BEGIN" command. It would be then
> possible to write something like this (an example, of course):
>
> conn = PgSQL.connect(database = "dbname",
>                      client_encoding = 'iso8859-2',
>                      on_session_start = ["SET CLIENT_ENCODING TO 'LATIN2';"],
>                      on_transaction_start = ["SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;"])
> .
> .
> .
> conn.on_transaction_start.append("some SQL commands");
>
> I agree that this idea could be not very bright ;) Especially
> isolation levels probably should be treated separately, due to their
> special meaning. However, such functionality should ease providing
> future enhancements which will unlikely be demanded by a growing
> community of pyPgSQL users :))
>
> What do You think about this?
I am leery of straying too far from the DB-API specification in the PgSQL
module (now the libpq module is a horse of a different color - it makes no
claim of DB-API compatibility).
> I'd like to add here that for me, DBI 2.0's cursors should be used
> only for typical DQL statements, like SELECT, INSERT, UPDATE and
> DELETE. Other SQL commands (especially those which CREATE or DROP
> something, ALTER a database structure, or SET some parameters)
> shouldn't be used this way (since they usually cannot be issued during
> a transaction, for instance), but DBI specification does not provide
> any good solution for this. I think that this all is because programs
> which make use of DBI-compatible libraries should be portable (to
> other DBMSes), and that a good, transaction-safe method of sending
> these commands to PostgreSQL should be proposed here.
Actually, with the newer versions of PostgreSQL, things that could not be
in a transaction are now transaction-safe. For example, in version 7.1,
you can drop tables/indices within a transaction, but you couldn't in
previous versions. Also, you can use another connection with autocommit on
to do the CREATEs, DROPs and ALTERs.
___________________________________________________________________________
____ | Billy G. Allie | Domain....: Bil...@mu...
| /| | 7436 Hartwell | MSN.......: B_G...@em...
|-/-|----- | Dearborn, MI 48126|
|/ |LLIE | (313) 582-1540 |
|
|
From: Adam B. <ad...@po...> - 2001-11-28 20:24:50
|
On Mon, Nov 26, 2001 at 02:24:13PM +0100, Gerhard Häring wrote:
> > Then the PgSQL module should create a new Connection object, make a
> > connection to the database, and send:
> >
> > SET CLIENT_ENCODING TO 'LATIN2';
>
> I think it's a better idea that the connect method gets an optional parameter
> client_encoding (used if and only if conversions to/from Unicode are done), but
> the user has to issue a "SET CLIENT_ENCODING TO 'whatever'" manually, too.
OK, it looks good for me. As a programmer I don't like when a library
wrapper works behind the scenes and sends commands to a database
backend on its own. However, it would be nice if PgSQL could send
automatically some commands to PostgreSQL backend on every session or
transaction start. Lately I have convinced Billy (at least I hope so
;) ) to introduce a transaction isolation level support, which is
still absent from other Python interfaces for PostgreSQL (pyPgSQL is
the first, as I know). I thought a bit about all that and a general
solution came to my mind: two lists (or, even better: dictionaries) of
strings. One of them should be sent to PostgreSQL backend on session
start, the other just after every "BEGIN" command. It would be then
possible to write something like this (an example, of course):
conn = PgSQL.connect(database = "dbname",
                     client_encoding = 'iso8859-2',
                     on_session_start = ["SET CLIENT_ENCODING TO 'LATIN2';"],
                     on_transaction_start = ["SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;"])
.
.
.
conn.on_transaction_start.append("some SQL commands");
I agree that this idea could be not very bright ;) Especially
isolation levels probably should be treated separately, due to their
special meaning. However, such functionality should ease providing
future enhancements which will unlikely be demanded by a growing
community of pyPgSQL users :))
What do You think about this?
I'd like to add here that for me, DBI 2.0's cursors should be used
only for typical DQL statements, like SELECT, INSERT, UPDATE and
DELETE. Other SQL commands (especially those which CREATE or DROP
something, ALTER a database structure, or SET some parameters)
shouldn't be used this way (since they usually cannot be issued during
a transaction, for instance), but DBI specification does not provide
any good solution for this. I think that this all is because programs
which make use of DBI-compatible libraries should be portable (to
other DBMSes), and that a good, transaction-safe method of sending
these commands to PostgreSQL should be proposed here.
Regards,
--
Adam Buraczewski <ad...@po...> * Linux registered user #165585
GCS/TW d- s-:+>+:- a- C+++(++++) UL++++$ P++ L++++ E++ W+ N++ o? K? w--
O M- V- PS+ !PE Y PGP+ t+ 5 X+ R tv- b+ DI? D G++ e+++>++++ h r+>++ y?
|
|
From: Gerhard <ger...@gm...> - 2001-11-26 13:24:25
|
Adam, thanks for letting me know your thoughts about this. I hope it's okay to CC the list.

On Fri, Nov 23, 2001 at 02:12:11AM +0100, Adam Buraczewski wrote:
> Hallo,
>
> I'm also interested in good working of pyPgSQL with various string
> encodings. I mainly use ISO 8859-2 at server side and Win CP 1250 or
> UTF-8 at client side.
>
> On Thu, Nov 22, 2001 at 06:46:03AM +0100, Gerhard Häring wrote:
> > - Changed the PgSQL module to accept also UnicodeType where it accepts
> >   StringType
>
> It sounds great for me :)
>
> > - Before sending the query string to the libpq module, check if the query
> >   string is of type Unicode, if so, encode it via UTF-8 to a StringType and
> >   send this one instead
>
> Well, it should be rather converted into current database client
> encoding IMHO. You shouldn't assume that when someone uses Python
> unicode strings, he/she wants also to use UNICODE at server side. The
> reason is that PostgreSQL still does not handle Unicode/UTF-8
> completely (for example, there are problems with Polish diacritical
> characters which are absent when only 8-bit encoding is used at server
> side).

My implementation now converts from/to Unicode using the currently selected client_encoding (see below).

> > - in pgconnection.c, added a read-write attribute clientencoding to the
> >   PgConnection_Type
>
> I cannot agree with changing anything in pyPgSQL.libpq. [...]

All I did was expose the functions from PostgreSQL's libpq for changing and querying the current client_encoding. Now I've dropped all this because it's not necessary and causing problems (see below).

> However, such functionality should be obviously added to pyPgSQL.PgSQL
> module. It would be nice to write something like this (an example):
>
> conn = PgSQL.connect(database = 'dbname',
>                      client_encoding = 'iso8859-2',
>                      unicode_results = 0)

I like this proposal very much. Partly because it's almost what I had in mind anyway :)

> Then the PgSQL module should create a new Connection object, make a
> connection to the database, and send:
>
> SET CLIENT_ENCODING TO 'LATIN2';

I've started implementing this, but when I had almost finished it I threw it all away. The reason is that this would become a maintenance nightmare later on. I'd have to know about all possible names of an encoding at Python-side, normalize them (using an ugly try-catch and encodings.aliases) and keep a dictionary to map the Python encoding name to the PostgreSQL encoding name. This dictionary would have to be updated once new PostgreSQL encodings become available.

I think it's a better idea that the connect method gets an optional parameter client_encoding (used if and only if conversions to/from Unicode are done), but the user has to issue a "SET CLIENT_ENCODING TO 'whatever'" manually, too.

I've changed the connect method (and the Connection constructor) like this:

- add a new parameter client_encoding. If client_encoding is None, it defaults to sys.getdefaultencoding(); if it is a string, self.client_encoding is set to (client_encoding, ); else it's left unchanged. The tuple self.client_encoding is expanded to the parameters of the string encode function and the second and third parameters of the unicode() function when doing charset conversion.
- add a unicode_results parameter. If true, the typecast() method in TypeCache changes strings to Unicode strings using the client_encoding of the connection object.

> to the PostgreSQL backend. Later, instructions like:
>
> c = conn.cursor()
> c.execute(u'select sth from tab where field = %s;', u'aaaa')
>
> should change both Unicode strings to ISO 8859-2, perform argument
> substitution, and send a query to backend. Results should be left
> without change (encoded in client_encoding), unless "unicode_results
> == 1", when all strings should be converted back to Unicode strings.
>
> Please remember also that it is possible that someone uses PostgreSQL
> without unicode and conversion-on-the-fly facilities. In such
> circumstances "client_encoding" and "unicode_results" variables should
> not be set to anything, and PgSQL should not recode any strings (using
> Unicode strings should be illegal) neither send "SET CLIENT_ENCODING"
> commands to the backend.

Hmm. As I said, I'd rather not let pyPgSQL send SET CLIENT_ENCODING commands, but for finding out whether libpq and/or the backend support Unicode or charset conversion, I think I'll need additional functions in libpq (if only for checking whether the MULTIBYTE macro is defined).

> I attached a small Python program which checks how PgSQL works with
> various client-backend encodings. I wrote it for Billy G. Allie some
> time ago. Feel free to use and modify it, according to Your needs.

Thanks, that will sure be useful for testing.

Everything is far from finished, but I'd like to hear what others (esp. Billy) think about the interface and whether my approach is right.

Gerhard
--
mail: gerhard <at> bigfoot <dot> de registered Linux user #64239
web: http://www.cs.fhm.edu/~ifw00065/ OpenPGP public key id 86AB43C0
public key fingerprint: DEC1 1D02 5743 1159 CD20 A4B6 7B22 6575 86AB 43C0
reduce(lambda x,y:x+y,map(lambda x:chr(ord(x)^42),tuple('zS^BED\nX_FOY\x0b')))
|
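The conversion rule described in this thread can be sketched as follows. This is a hypothetical illustration in modern Python, not pyPgSQL's actual code: the names Connection, typecast, client_encoding, and unicode_results merely follow the thread's vocabulary.

```python
import sys

class Connection:
    """Toy model of the proposed connect() parameters, for illustration only."""

    def __init__(self, client_encoding=None, unicode_results=False):
        if client_encoding is None:
            client_encoding = sys.getdefaultencoding()
        # Stored as a tuple so it can be *-expanded into decode()/encode(),
        # mirroring the (encoding, [errors]) expansion described above.
        self.client_encoding = (client_encoding,)
        self.unicode_results = unicode_results

    def typecast(self, value):
        # Only raw byte strings coming back from the backend are touched;
        # everything else passes through unchanged.
        if self.unicode_results and isinstance(value, bytes):
            return value.decode(*self.client_encoding)
        return value

conn = Connection(client_encoding="iso8859-2", unicode_results=True)
print(conn.typecast(b"\xb3"))   # byte 0xB3 is the Polish letter l-stroke in ISO 8859-2
```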
|
From: Billy G. A. <Bil...@mu...> - 2001-11-26 04:09:53
|
markus jais wrote:
> hello
>
> I have the following code:
>
> -----
> db = PgSQL.connect("::proglang", user="markus", password="markus")
> st = db.cursor()
> st.execute("select * from languages")
>
> print st.rowcount # prints -1
> res = st.fetchall()
> print st.rowcount # prints 3
>
> for i in res:
> print i
>
> st.close
> db.close
> ----
>
> why does the first rowcount print "-1"????
> shouldn't it print "3", as the second, which is the correct answer???
>
> the db has three rows:
>
> the complete output of the program is:
>
> -1
> 3
> ['Perl', 'Larry Wall', 'www.perl.com', 'better than C']
> ['Python', 'Guido von Rossum', 'www.python.org', 'very cool']
> ['Ruby', 'Yukihiro Matsumoto', 'www.ruby-lang.org', 'great']
>
>
> is this a bug??
> or is there something wrong with my understanding of rowcount??
>
> markus
Markus,
That's no bug, that's a feature! :-)
By default, pyPgSQL uses PostgreSQL portals (i.e. cursors) when executing a
query. This causes the value of rowcount to be indeterminate until a fetchXXX
method call is executed. At that time, the rowcount will be set to the number
of rows returned by the fetchXXX method call. If you tell pyPgSQL not to use
portals, then rowcount will be the number of rows returned by the query.
BTW: This behaviour is explained in footnote 7 of the DB-API 2.0 specification:
7. The rowcount attribute may be coded in a way that updates its value
dynamically. This can be useful for databases that return useable
rowcount values only after the first call to a .fetchXXX() method.
I hope this helps your understanding of rowcount in pyPgSQL.
___________________________________________________________________________
____ | Billy G. Allie | Domain....: Bil...@mu...
| /| | 7436 Hartwell | MSN.......: B_G...@em...
|-/-|----- | Dearborn, MI 48126|
|/ |LLIE | (313) 582-1540 |
|
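The DB-API footnote Billy cites can be seen with the standard library's sqlite3 module as well; unlike pyPgSQL, sqlite3 simply leaves rowcount at -1 for SELECT statements even after fetching, which is why len() on the fetched rows is the only fully portable count:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE languages (name TEXT)")
cur.executemany("INSERT INTO languages VALUES (?)",
                [("Perl",), ("Python",), ("Ruby",)])

cur.execute("SELECT * FROM languages")
print(cur.rowcount)   # -1: this driver never determines the result size
rows = cur.fetchall()
print(len(rows))      # 3: counting the fetched rows always works
```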
|
From: markus j. <in...@mj...> - 2001-11-25 20:41:56
|
hello
I have the following code:
-----
db = PgSQL.connect("::proglang", user="markus", password="markus")
st = db.cursor()
st.execute("select * from languages")
print st.rowcount # prints -1
res = st.fetchall()
print st.rowcount # prints 3
for i in res:
print i
st.close
db.close
----
why does the first rowcount print "-1"????
shouldn't it print "3", as the second, which is the correct answer???
the db has three rows:
the complete output of the program is:
-1
3
['Perl', 'Larry Wall', 'www.perl.com', 'better than C']
['Python', 'Guido von Rossum', 'www.python.org', 'very cool']
['Ruby', 'Yukihiro Matsumoto', 'www.ruby-lang.org', 'great']
is this a bug??
or is there something wrong with my understanding of rowcount??
markus
--
Markus Jais
http://www.mjais.de
in...@mj...
The road goes ever on and on - Bilbo Baggins
|
|
From: Adam B. <ad...@po...> - 2001-11-23 01:34:35
|
Hallo,
I'm also interested in good working of pyPgSQL with various string
encodings. I mainly use ISO 8859-2 at server side and Win CP 1250 or
UTF-8 at client side.
On Thu, Nov 22, 2001 at 06:46:03AM +0100, Gerhard Häring wrote:
> - Changed the PgSQL module to accept also UnicodeType where it accepts
> StringType
It sounds great for me :)
> - Before sending the query string to the libpq module, check if the query
> string is of type Unicode, if so, encode it via UTF-8 to a StringType and
> send this one instead
Well, it should be rather converted into current database client
encoding IMHO. You shouldn't assume that when someone uses Python
unicode strings, he/she wants also to use UNICODE at server side. The
reason is that PostgreSQL still does not handle Unicode/UTF-8
completely (for example, there are problems with Polish diacritical
characters which are absent when only 8-bit encoding is used at server
side).
> - in pgconnection.c, added a read-write attribute clientencoding to the
> PgConnection_Type
I cannot agree with changing anything in pyPgSQL.libpq. It is a
low-level module, which has the same functionality as PostgreSQL
native libpq library. It should only send data to the server and
allow to read results, nothing more. Especially it shouldn't change
character encodings implicitly.
At least changing the way libpq deals with strings, would break some
of my programs. ;((
However, such functionality should be obviously added to pyPgSQL.PgSQL
module. It would be nice to write something like this (an example):
conn = PgSQL.connect(database = 'dbname',
                     client_encoding = 'iso8859-2',
                     unicode_results = 0)
Then the PgSQL module should create a new Connection object, make a
connection to the database, and send:
SET CLIENT_ENCODING TO 'LATIN2';
to the PostgreSQL backend. Later, instructions like:
c = conn.cursor()
c.execute(u'select sth from tab where field = %s;', u'aaaa')
should change both Unicode strings to ISO 8859-2, perform argument
substitution, and send a query to backend. Results should be left
without change (encoded in client_encoding), unless "unicode_results
== 1", when all strings should be converted back to Unicode strings.
Please remember also that it is possible that someone uses PostgreSQL
without unicode and conversion-on-the-fly facilities. In such
circumstances "client_encoding" and "unicode_results" variables should
not be set to anything, and PgSQL should not recode any strings (using
Unicode strings should be illegal) neither send "SET CLIENT_ENCODING"
commands to the backend.
I attached a small Python program which checks how PgSQL works with
various client-backend encodings. I wrote it for Billy G. Allie some
time ago. Feel free to use and modify it, according to Your needs.
Regards,
--
Adam Buraczewski <ad...@po...> * Linux registered user #165585
GCS/TW d- s-:+>+:- a- C+++(++++) UL++++$ P++ L++++ E++ W+ N++ o? K? w--
O M- V- PS+ !PE Y PGP+ t+ 5 X+ R tv- b+ DI? D G++ e+++>++++ h r+>++ y?
|
|
From: Gerhard <ger...@gm...> - 2001-11-22 05:46:13
|
Ok, maybe I'll just describe what I've done so far (locally).
- Changed the PgSQL module to accept also UnicodeType where it accepts
StringType
- Before sending the query string to the libpq module, check if the query
string is of type Unicode, if so, encode it via UTF-8 to a StringType and
send this one instead
- in pgconnection.c, added a read-write attribute clientencoding to the
PgConnection_Type
All of this works pretty well so far, for example the following works as
expected (never mind if you see weird chars, it's 'Internet' in Russian KOI-8
encoding):
#!/usr/bin/env python
from pyPgSQL import PgSQL
con = PgSQL.connect(database="testu")
cursor = con.cursor()
name = unicode("éÎÔÅÒÎÅÔ", "koi8-r") # 'Internet' in Russian
cursor.execute("insert into gh (name) values ('%s')" % name)
print con.conn.clientencoding # 'UNICODE'
con.conn.clientencoding = 'KOI8'
print con.conn.clientencoding # 'KOI-8'
cursor.execute("select * from gh")
print cursor.fetchone()[0] # works, is automatically converted
For languages that cannot be encoded in 8 bits, I fear it will get more
complicated. So I propose the following:
- Strings sent to the backend: Unicode is encoded as UTF-8. StringType is sent
as-is like before (with escaping as needed). If people set the
clientencoding, PostgreSQL will even do the charset conversion (to Unicode or
whatever) for them.
- Strings retrieved from the backend: If the client-encoding is UNICODE,
strings are always retrieved as UnicodeType. This is a major change, but it's
IMO necessary to make using east-asian languages possible at all. If people
want to receive StringType but the data can possibly be Unicode, they have to
set the client-encoding accordingly. For German, I'd have to set
clientencoding to 'LATIN1', for example.
- If the PostgreSQL client-encoding is any of the special non-Unicode ones like
SJIS, BIG5 or whatever, major reality failure happens ;-) I have no idea
about these encodings, and neither has Python.
Gerhard
--
mail: gerhard <at> bigfoot <dot> de registered Linux user #64239
web: http://www.cs.fhm.edu/~ifw00065/ OpenPGP public key id 86AB43C0
public key fingerprint: DEC1 1D02 5743 1159 CD20 A4B6 7B22 6575 86AB 43C0
reduce(lambda x,y:x+y,map(lambda x:chr(ord(x)^42),tuple('zS^BED\nX_FOY\x0b')))
|
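One aside on the snippet in the message above: it splices the value into the SQL string with %, which breaks as soon as the text contains a quote. DB-API parameter binding sidesteps that. A sketch with the standard library's sqlite3 module as a stand-in (sqlite3 uses the qmark paramstyle, where pyPgSQL uses format-style %s):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE gh (name TEXT)")

name = "Интернет"   # 'Internet' in Russian, as in the original example
# Bind the value instead of %-interpolating it into the SQL text: the
# driver handles quoting, and embedded quotes can't break the statement.
cur.execute("INSERT INTO gh (name) VALUES (?)", (name,))

cur.execute("SELECT name FROM gh")
print(cur.fetchone()[0])   # Интернет
```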
|
From: Gerhard <ger...@gm...> - 2001-11-22 05:07:57
|
I'll try to summarize my findings on the world beyond US-ASCII in PostgreSQL
and Python:
Python
======
Python has two string types: StringType and UnicodeType. Unicode is the simpler
case (really), because in Unicode every character has a defined meaning. Its
meaning doesn't depend on the charset used.
StringType is ok as long as you only use US-ASCII (chars <= 127). But if you
use 8-bit characters, the meaning of the characters depend on the current
charset. This is important, if you want to convert between StringType and
UnicodeType, for example. For conversion, you must know in which charset the
StringType is encoded. There's only one way to set the default charset in
Python and it's awkward (and the designers wanted it like this): You must set
it in a sitecustomize.py that must be somewhere in the PYTHONPATH.
If you don't set your defaultencoding explicitly, it defaults to 'ascii':
>>> import sys
>>> sys.getdefaultencoding()
'ascii'
Ok, now what happens if I put the following sitecustomize.py in my PYTHONPATH:
import sys
sys.setdefaultencoding('iso-8859-1')
>>> import sys
>>> sys.getdefaultencoding()
'iso-8859-1'
sys.defaultencoding is the encoding that is used when you don't supply an
encoding explicitly when converting between UnicodeType and StringType.
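For context from a later Python's point of view: sys.setdefaultencoding() was
removed in Python 3, where the text/bytes conversion must always name its
encoding explicitly, which avoids the sitecustomize.py trick entirely. A small
demonstration of why the charset matters for 8-bit data:

```python
# Explicit conversions avoid relying on an interpreter-wide default.
text = "Bärwurz"                          # a text (Unicode) string

latin1_bytes = text.encode("iso-8859-1")  # one byte per character
utf8_bytes = text.encode("utf-8")         # 'ä' becomes two bytes here

assert len(latin1_bytes) == 7             # same 7 characters...
assert len(utf8_bytes) == 8               # ...but different byte counts
# Decoding with the matching charset recovers the identical string.
assert latin1_bytes.decode("iso-8859-1") == utf8_bytes.decode("utf-8")
```

The same byte sequence decoded with the wrong charset would yield different
characters, which is exactly the ambiguity described above for StringType.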
PostgreSQL
==========
(My PostgreSQL is built with all i18n features on: --enable-unicode-conversion
--enable-recode --enable-multibyte --enable-locale)
Here's what little info there is from the PostgreSQL docs:
http://www.postgresql.org/idocs/index.php?multibyte.html
PostgreSQL can set an encoding for the database, and it can have a
client-encoding for the client library. Some combination of encodings can be
transparently converted by PostgreSQL.
It *looks* like (when the client-encoding is UNICODE) the client sends UTF-8
to the backend, but I haven't found any description of this implementation.
Well, UTF-8 seems to be the usual way of sending Unicode around nowadays.
We can create UTF-8 relatively easily from a Python Unicode string:
u"whatever".encode("utf-8")
So much for my analysis, I'll shortly write something about implementation.
Gerhard
|
|
From: Mark M. <mar...@mc...> - 2001-11-20 19:02:48
|
Gerhard, thank you for the workaround. That's very helpful.
I don't have a pressing need to store Unicode. How I came across the issue
is somewhat strange. I was manipulating xml with minidom and found that the
strings it squirts out with toxml() are Unicode strings. When I tried to
insert those strings into PostgreSQL via PgSQL--completely ignorant of
Unicode, internationalization issues--I got that strange error:
Unable to locate type name 'u' in catalog
The simple solution in our case was just to do:
unicodeString = foo.toxml()
safeString = unicodeString.encode('utf-8')
And then insert the safeString--and that worked. I confess, I know very
little about internationalization issues.
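Mark's workaround can be reproduced with the standard library alone; this is a
minimal sketch of the toxml()-then-encode pattern (in modern Python, where
toxml() returns a text string):

```python
# Serialize a DOM with xml.dom.minidom and encode the result before
# handing it to a database driver that only accepts byte strings.
from xml.dom.minidom import parseString

doc = parseString("<name>Bärwurz</name>")
xml_text = doc.documentElement.toxml()  # a text (Unicode) string
safe = xml_text.encode("utf-8")         # bytes, safe to store as UTF-8

assert xml_text == "<name>Bärwurz</name>"
assert safe.decode("utf-8") == xml_text
```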
I appreciate your very timely and helpful replies.
Cheers,
// mark
|
|
From: Gerhard <ger...@gm...> - 2001-11-20 08:11:40
|
On Mon, Nov 19, 2001 at 07:26:49PM -0800, Mark McEahern wrote:
> Hi, I'm trying to insert unicode strings into PostgreSQL via PgSQL and I'm
> having a devil of a time with it. Here's what I tried:
>
> $ createdb -E UNICODE testu
> $ psql testu
> # \encoding
> UNICODE
> # create table person (firstname text)
> CREATE
Mark,
there might be an (ugly) workaround for the current version of pyPgSQL. A lot
of functionality is lost, but at least it seems to allow using Unicode
databases:
from pyPgSQL import PgSQL
# with CREATE TABLE GH(NAME VARCHAR(20));
db = PgSQL.connect(database="testu")
cursor = db.cursor()
name = unicode("Bärwurz", "latin-1")
stmnt = u"insert into gh (name) values ('%s')" % name
stmnt = stmnt.encode("utf-8")
cursor.execute(stmnt)
cursor.execute("select * from gh")
print unicode(cursor.fetchone()[0], "utf-8").encode("latin-1")
The reason Unicode currently doesn't work right is that almost all of the
pyPgSQL code expects a StringType and won't work with UnicodeType parameters.
But I fear that's only part of the problem. *Really* fixing it will probably be
hard work. We'll also have to decide what the desired behaviour is in such
cases. I think the usage of Unicode will also have to depend on the setting of
the current client encoding, for example.
Btw. the only other db module I could find that does *handle* Unicode at all is
mxODBC.
Gerhard
|
|
From: Gerhard <ger...@gm...> - 2001-11-20 05:35:22
|
On Mon, Nov 19, 2001 at 07:26:49PM -0800, Mark McEahern wrote:
> Hi, I'm trying to insert unicode strings into PostgreSQL via PgSQL and I'm
> having a devil of a time with it. Here's what I tried:
>
> [...]
>
> Q: How come this doesn't just work?
>
> I wish I could ask a more intelligent question than that!
It's a very intelligent question. Alas, the answer is simply that Unicode
support isn't implemented yet. At least not fully. I wasn't aware of this fact
and I will work on it for the next release.
Good, now I'll have to look at how much effort this really is.
Gerhard
|
From: Mark M. <ma...@mc...> - 2001-11-20 03:27:50
|
Hi, I'm trying to insert unicode strings into PostgreSQL via PgSQL and I'm
having a devil of a time with it. Here's what I tried:
$ createdb -E UNICODE testu
$ psql testu
# \encoding
UNICODE
# create table person (firstname text)
CREATE
So far so good.
So then I hop into Python and try:
>>> import PgSQL
>>> c = PgSQL.Connection("dbname=testu")
>>> cur = c.cursor()
>>> q = "insert into person (firstname) values (%s)"
>>> firstname = u'Mark'
>>> params = []
>>> params.append(firstname)
>>> cur.execute(q, params)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/usr/lib/python2.1/site-packages/PgSQL.py", line 2111, in execute
raise OperationalError, msg
libpq.OperationalError: ERROR: Unable to locate type name 'u' in catalog
Q: How come this doesn't just work?
I wish I could ask a more intelligent question than that!
Thanks,
// mark
|
|
From: dubal <du...@kh...> - 2001-11-06 08:16:34
|
(We had sent this message yesterday. However, we are not sure that it went
because our mail server had problems. This is a repeat post. Please ignore it
if you have already received it.)

Many thanks for your very prompt response.

1. As for the installation problem, you have advised us to ignore the gcc
error message. The installation is done and the module is working.

2. We could not read datetime columns on RH6.2 because we had installed a
DateTime module from somewhere I don't remember. It did not have the ISO
module and that was the trouble. Today we installed the mx rpm available for
RH7.2, which also has the ISO module. Therefore pypgsql is working fine.

Thanks again. Have a nice day.

Dubal

On Monday 05 November 2001 01:05 pm, you wrote:
> dubal wrote:
> > Hello everyone,
> >
> > We have tried pygresql that is supplied with postgresql but we had
> > problems with quoting, nulls and checking rowcount after update
> > statements.
> >
> > We tried PoPy but we had problems with quoting and nulls.
> >
> > We tried this (pypgsql). We did not have any of the above problems.
> > However, we could not read any date columns from db.
>
> That is strange. Can you post an example that illustrates the problem
> (table definition, code used to access the dates, etc.)?
>
> > Also while installing, python2 setup.py build aborts with gcc
> > complaining: gcc: unrecognized option `-R/usr/lib/pgsql'
>
> The build doesn't abort, at least when I install it under cygwin using gcc.
> The message is just a warning and the option is ignored.
>
> Billy G. Allie