From: Timothy S. <ti...@op...> - 2006-04-20 07:49:03
|
Terry Macdonald wrote:
> Billy G. Allie wrote:
>> On Sun, 2006-04-02 at 19:25 +0100, Terry Macdonald wrote:
>>> Billy G. Allie wrote:
>>>> On Mon, 2006-03-27 at 13:52 +0100, Terry Macdonald wrote:
>>>>> Hi,
>>>>> [ . . . ]
>>>
>>> If I have understood you then, if I have a single cursor on a connection,
>>> I would need to do a commit after every select so that a subsequent
>>> insert or update would have a correct timestamp.
>>> Or...
>>> I could have two connections, each with a cursor; one connection/cursor
>>> pair would be used for non-select statements and the other
>>> connection/cursor pair would be used for selects.
>>>
>>> Yes?
>>
>> Either would work (providing that the connection/cursor pair used for
>> inserts and updates is committed frequently enough to make the
>> timestamps reflect the current time).
>>
>> Note that the restriction of one active transaction per connection
>> is a PostgreSQL restriction, not a DB-API 2.0 compliant module
>> restriction.
>
> I will be using a hack for now whereby, for a non-select query, a commit
> is performed before and after execution, as the cursor is also used for
> select queries, so a transaction is in progress pretty much the whole time.
>
> Has no one else experienced this issue before? I'm surprised it
> doesn't show up in the mailing list more often. Or am I doing things
> completely arse about face!
>
> Do people open a new connection for every request?! Surely not.
>
> Thanks for your help, Billy.

The general usage of cursors I've seen is that they are short-lived things you create for the purpose of one transaction and then close. There is no real reason to leave them open when you can just leave the db connection open and create a cursor as you need it. That's the explanation I've seen on the psycopg lists anyway, and that's how I use it, in a similar transaction-based application in which timestamps play a big role.
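As a rough illustration of the short-lived-cursor pattern described above, here is a minimal sketch; the connection parameters, table and column names are invented, and pyPgSQL's documented pyformat parameter style is assumed:

    from pyPgSQL import PgSQL

    # One long-lived connection for the life of the application.
    conn = PgSQL.connect(database="mydb")       # hypothetical connection parameters

    def insert_note(text):
        # A cursor created just for this unit of work, then closed again.
        cur = conn.cursor()
        try:
            cur.execute("INSERT INTO notes (body, created) VALUES (%(b)s, now())",
                        {'b': text})
            conn.commit()   # end the transaction so the next now() is current
        finally:
            cur.close()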
From: Terry M. <ter...@ds...> - 2006-04-04 09:25:08
|
Billy G. Allie wrote:
> On Sun, 2006-04-02 at 19:25 +0100, Terry Macdonald wrote:
>> Billy G. Allie wrote:
>>> On Mon, 2006-03-27 at 13:52 +0100, Terry Macdonald wrote:
>>>> Hi,
>>>> [ . . . ]
>>
>> If I have understood you then, if I have a single cursor on a connection,
>> I would need to do a commit after every select so that a subsequent
>> insert or update would have a correct timestamp.
>> Or...
>> I could have two connections, each with a cursor; one connection/cursor
>> pair would be used for non-select statements and the other
>> connection/cursor pair would be used for selects.
>>
>> Yes?
>
> Either would work (providing that the connection/cursor pair used for
> inserts and updates is committed frequently enough to make the
> timestamps reflect the current time).
>
> Note that the restriction of one active transaction per connection
> is a PostgreSQL restriction, not a DB-API 2.0 compliant module
> restriction.

I will be using a hack for now whereby, for a non-select query, a commit is performed before and after execution, as the cursor is also used for select queries, so a transaction is in progress pretty much the whole time.

Has no one else experienced this issue before? I'm surprised it doesn't show up in the mailing list more often. Or am I doing things completely arse about face!

Do people open a new connection for every request?! Surely not.

Thanks for your help, Billy.
From: Billy G. A. <bil...@de...> - 2006-04-03 03:21:50
|
On Sun, 2006-04-02 at 19:25 +0100, Terry Macdonald wrote:
> Billy G. Allie wrote:
>> On Mon, 2006-03-27 at 13:52 +0100, Terry Macdonald wrote:
>>> Hi,
>>> [ . . . ]
>
> If I have understood you then, if I have a single cursor on a connection,
> I would need to do a commit after every select so that a subsequent
> insert or update would have a correct timestamp.
> Or...
> I could have two connections, each with a cursor; one connection/cursor
> pair would be used for non-select statements and the other
> connection/cursor pair would be used for selects.
>
> Yes?

Either would work (providing that the connection/cursor pair used for inserts and updates is committed frequently enough to make the timestamps reflect the current time).

Note that the restriction of one active transaction per connection is a PostgreSQL restriction, not a DB-API 2.0 compliant module restriction.
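A rough sketch of the two-connection arrangement discussed above, one connection/cursor pair for selects and one for writes; all names are invented and the pyformat parameter style is assumed:

    from pyPgSQL import PgSQL

    read_conn  = PgSQL.connect(database="mydb")   # used only for selects
    write_conn = PgSQL.connect(database="mydb")   # used only for inserts/updates

    read_cur  = read_conn.cursor()
    write_cur = write_conn.cursor()

    # A long-running read transaction on read_conn no longer freezes the
    # now() value used by writes, because those happen on write_conn.
    write_cur.execute("INSERT INTO log (msg, created) VALUES (%(m)s, now())",
                      {'m': 'job started'})
    write_conn.commit()    # committed promptly, so later now() calls stay current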
From: Terry M. <ter...@ds...> - 2006-04-02 18:26:03
|
Billy G. Allie wrote:
> On Mon, 2006-03-27 at 13:52 +0100, Terry Macdonald wrote:
>> Hi,
>>
>> I don't know if I am using the DB-API via pyPgSQL properly, but I
>> have a web app that opens a db connection and a couple of cursors at
>> application startup, and I use the same cursors throughout the life of
>> the app.
>>
>> Doing it this way I have noticed that transactions begin once a commit
>> has been done. If an SQL statement acts on a table with a timestamp
>> column using the now() function, then, because it is in a transaction,
>> the time used in the update/insert is when the transaction was started.
>> This means that if a significant time has elapsed after my last commit
>> before the cursor is used again to update/insert a row, the time stored
>> is that of when the transaction began and not when the row was inserted,
>> which could easily be long after the transaction began.
>>
>> What is the best practice in using cursors, and how do I get the
>> timestamp to reflect when the row was created?
>
> There are a number of things to consider:
>
> 1. Transactions are at the Connection level. This implies that all
>    cursors that share a connection share the transaction.
> 2. Transactions are started when the first cursor is opened on the
>    connection. Additional cursors created on the same connection
>    do not start another transaction (there is only one active
>    transaction per connection). If you need multiple transactions
>    at the same time, you will need multiple connections.
> 3. After a Connection.commit() or .rollback(), another transaction
>    is not started until a new cursor is created or until a query
>    is executed on an existing cursor. Remember - if there are
>    multiple cursors on the connection, the transaction is started
>    the first time .executeXXX or .callproc is executed on any one
>    of them.
>
> Given these things, I would recommend:
>
> 1. Using only one cursor per connection, unless you need to process
>    the results of multiple queries simultaneously.
> 2. If you need to perform updates based on the results of a query
>    in progress, use two connections with one cursor each: one for
>    the query being processed, the other for the updates.
> 3. Commit the updates as soon as possible, preferably after each
>    row if there is significant time between the updates. This will
>    delay the creation of a transaction until the next update
>    occurs, ensuring that the timestamps reflect when the row was
>    created.
> 4. If creating a cursor ahead of time, issue a
>    connection.rollback() after creating all the cursors for that
>    connection. This will delay the creation of the transaction
>    until the first time a cursor for that connection is used.

If I have understood you then, if I have a single cursor on a connection, I would need to do a commit after every select so that a subsequent insert or update would have a correct timestamp.

Or...

I could have two connections, each with a cursor; one connection/cursor pair would be used for non-select statements and the other connection/cursor pair would be used for selects.

Yes?
From: Billy G. A. <bil...@de...> - 2006-04-01 20:26:57
|
On Mon, 2006-03-27 at 13:52 +0100, Terry Macdonald wrote:
> Hi,
>
> I don't know if I am using the DB-API via pyPgSQL properly, but I
> have a web app that opens a db connection and a couple of cursors at
> application startup, and I use the same cursors throughout the life of
> the app.
>
> Doing it this way I have noticed that transactions begin once a commit
> has been done. If an SQL statement acts on a table with a timestamp
> column using the now() function, then, because it is in a transaction,
> the time used in the update/insert is when the transaction was started.
> This means that if a significant time has elapsed after my last commit
> before the cursor is used again to update/insert a row, the time stored
> is that of when the transaction began and not when the row was inserted,
> which could easily be long after the transaction began.
>
> What is the best practice in using cursors, and how do I get the
> timestamp to reflect when the row was created?

There are a number of things to consider:

1. Transactions are at the Connection level. This implies that all
   cursors that share a connection share the transaction.
2. Transactions are started when the first cursor is opened on the
   connection. Additional cursors created on the same connection
   do not start another transaction (there is only one active
   transaction per connection). If you need multiple transactions
   at the same time, you will need multiple connections.
3. After a Connection.commit() or .rollback(), another transaction
   is not started until a new cursor is created or until a query
   is executed on an existing cursor. Remember - if there are
   multiple cursors on the connection, the transaction is started
   the first time .executeXXX or .callproc is executed on any one
   of them.

Given these things, I would recommend:

1. Using only one cursor per connection, unless you need to process
   the results of multiple queries simultaneously.
2. If you need to perform updates based on the results of a query
   in progress, use two connections with one cursor each: one for
   the query being processed, the other for the updates.
3. Commit the updates as soon as possible, preferably after each
   row if there is significant time between the updates. This will
   delay the creation of a transaction until the next update
   occurs, ensuring that the timestamps reflect when the row was
   created.
4. If creating a cursor ahead of time, issue a
   connection.rollback() after creating all the cursors for that
   connection. This will delay the creation of the transaction
   until the first time a cursor for that connection is used.

--
Billy G. Allie <bil...@de...>
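A minimal sketch of recommendations 3 and 4 above, assuming that creating a cursor can start a transaction on its connection; the names and values are invented:

    from pyPgSQL import PgSQL

    conn = PgSQL.connect(database="mydb")
    cur = conn.cursor()      # creating the cursor may already start a transaction
    conn.rollback()          # point 4: discard it, so the real transaction only
                             # begins at the first execute()

    # Point 3: commit each update promptly so the next statement's now()
    # reflects the time the row is actually written.
    cur.execute("UPDATE jobs SET finished = now() WHERE id = %(i)s", {'i': 42})
    conn.commit()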
From: Greg F. <gr...@gr...> - 2006-03-29 13:59:18
|
Hi,

In regards to my last mail, I've found the problem. If I turn on verbose debugging in the postgres server then pypgsql barfs here (line 866 in /usr/lib/python2.3/site-packages/pyPgSQL/PgSQL.py):

    if len(self.__conn.notices) != _nl:
        raise Warning, self.__conn.notices.pop()

It seems that the extra debug information has found its way into self.__conn.notices and is being treated as an error! Even if I catch this warning, it stops the function from completing, so the library continues not to work.

I can remove this problem for me by disabling the extra logging in /etc/postgresql/postgresql.conf:

    log_statement = true

will break it on a select for me. If I turn that off, it'll break with debug info in notices at a later stage. It looks like all (or at least a lot of) the debug info is making it into notices and being considered an error.

If this is normal and configurable to off, then sorry for the post. I could find no documentation on this, and it looks like a bug, or a confusing default behaviour at least.

Greg
From: Greg F. <gr...@gr...> - 2006-03-28 13:52:31
|
Hi,

I've run into problems with a script of mine that used to work a few months back. I've apt-get dist-upgraded my debian sarge box since, so I'd guess it's either a new postgresql or a new python-pgsql. I've tried using pypgsql 2.4.0-7 & 2.4.0-5 from debian, with postgresql versions v7.5.15 and 7.4.7-6sarge1.

The script does an INSERT into a table, then attempts to read entries (there should only be one) from that table for the key value I've just inserted. Here are some code snippets:

    try:
        cur.execute("INSERT INTO xmltv.channels (chan_id) VALUES ( '%s' )" % channel["channel_id"] )
        cOid = getChannelOid( channel["channel_id"] )

    def getChannelOid( cOID):
        try:
            cur.execute( "SELECT chan_oid FROM xmltv.channels where chan_id='%s'" % cOID )

The library barfs on the SELECT statement: an error occurs and the library tries to roll back the transaction. The item it is trying to do a select on has definitely been added to the table, because if I do a db.commit() here and abort I can see it still in the database. Also, if I enter the commands from the postgresql.log into psql manually, it works and returns the correct result.

Here's the python traceback of the failed select:

    File "./parse.py", line 63, in getChannelOid
      cur.execute( "SELECT chan_oid FROM xmltv.channels where chan_id=4" )
    File "/usr/lib/python2.3/site-packages/pyPgSQL/PgSQL.py", line 3086, in execute
      self.__makedesc__()
    File "/usr/lib/python2.3/site-packages/pyPgSQL/PgSQL.py", line 2826, in __makedesc__
      _tn, _pl, _ia, _bt = _cache.getTypeInfo(_typ)
    File "/usr/lib/python2.3/site-packages/pyPgSQL/PgSQL.py", line 886, in getTypeInfo
      raise Warning, self.__conn.notices.pop()
    libpq.Warning: LOG:  statement: BEGIN WORK

From the postgresql.log I can see (just before the rollback):

    STATEMENT:  SELECT typname, -1 , typelem FROM pg_type WHERE oid = 20
    2006-03-28 14:45:42 [32097] DEBUG:  StartTransactionCommand
    STATEMENT:  ROLLBACK WORK
    2006-03-28 14:45:42 [32097] LOG:  statement: ROLLBACK WORK

Is -1 a valid input into this SELECT?

Thanks a lot for any help you can provide,
Greg
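For comparison, a hedged sketch of the same insert/select written with DB-API parameter passing instead of manual string interpolation, assuming pyPgSQL's pyformat paramstyle; it does not address the notices problem described in the follow-up, and the connection details and channel value are invented:

    from pyPgSQL import PgSQL

    db = PgSQL.connect(database="xmltv")          # hypothetical database
    cur = db.cursor()
    channel = {"channel_id": "example.channel"}   # stand-in for the real data

    def getChannelOid(cur, chan_id):
        # Let pyPgSQL quote the value instead of formatting it into the SQL string.
        cur.execute("SELECT chan_oid FROM xmltv.channels WHERE chan_id = %(cid)s",
                    {'cid': chan_id})
        row = cur.fetchone()
        return row and row[0]

    cur.execute("INSERT INTO xmltv.channels (chan_id) VALUES (%(cid)s)",
                {'cid': channel["channel_id"]})
    cOid = getChannelOid(cur, channel["channel_id"])
    db.commit()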
From: Terry M. <ter...@ds...> - 2006-03-27 12:52:43
|
Hi,

I don't know if I am using the DB-API via pyPgSQL properly, but I have a web app that opens a db connection and a couple of cursors at application startup, and I use the same cursors throughout the life of the app.

Doing it this way I have noticed that transactions begin once a commit has been done. If an SQL statement acts on a table with a timestamp column using the now() function, then, because it is in a transaction, the time used in the update/insert is when the transaction was started. This means that if a significant time has elapsed after my last commit before the cursor is used again to update/insert a row, the time stored is that of when the transaction began and not when the row was inserted, which could easily be long after the transaction began.

What is the best practice in using cursors, and how do I get the timestamp to reflect when the row was created?

Cheers in advance
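To make the behaviour concrete, a small sketch; it assumes a PostgreSQL version in which now() is fixed at transaction start while timeofday() returns the wall-clock time as text, and the table and column names are invented:

    import time
    from pyPgSQL import PgSQL

    db = PgSQL.connect(database="mydb")
    cur = db.cursor()

    cur.execute("SELECT 1")      # a transaction starts here; now() is frozen
    time.sleep(60)               # the transaction stays open for a minute

    # This row gets the time the transaction began, roughly a minute ago:
    cur.execute("INSERT INTO audit (msg, created) VALUES ('late', now())")

    # Committing first starts a fresh transaction, so this row gets the
    # current time (timeofday()::timestamp would also give wall-clock time):
    db.commit()
    cur.execute("INSERT INTO audit (msg, created) VALUES ('fresh', now())")
    db.commit()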
From: <gh...@gh...> - 2006-03-06 07:11:47
|
mattia tortorelli wrote:
> Hi all!
>
> I'm trying to install trac 0.9.3 on a Linux RedHat Enterprise 4 server.
> I have Python 2.3 and 2.4 and I've installed pyPgSQL 2.3 and 2.4
> respectively. When I try to create a new environment with the trac-admin
> command, it uses pyPgSQL to connect to a PgSQL database, but it gives me
> the next error: [...]

There is no pyPgSQL 2.4 released, yet.

> /usr/lib/python2.4/site-packages/pyPgSQL/libpq/__init__.py:23:
> RuntimeWarning: Python C API version mismatch for module libpq: This
> Python has API version 1012, module libpq has version 1011.
>
>   from libpq import *
>
> *** glibc detected *** free(): invalid pointer: 0xb7ed83c0 ***

This error message comes from using an extension module compiled for one Python major version in a different Python major version. Most likely you're using a binary install of pyPgSQL compiled for Python 2.3 under Python 2.4. One solution is to compile pyPgSQL for Python 2.4 yourself.

-- Gerhard
From: Sana T. <g5t...@cd...> - 2006-02-19 22:11:35
|
Hello,

I'm having some strange trouble with pypgsql. When I run a Python CGI script that tries to access the postgresql database under Apache, I keep getting this error:

    ImportError: ./libpqmodule.so: undefined symbol: PyUnicodeUCS2_EncodeDecimal

The funny thing is that I don't get this error when I run the Python script without Apache. Anyone know what's going on? It's been causing me lots of problems and I don't know how to get around it!

Thanks,
Sana
From: mattia t. <mat...@ti...> - 2006-02-14 11:31:44
|
Hi all!

I'm trying to install trac 0.9.3 on a Linux RedHat Enterprise 4 server. I have Python 2.3 and 2.4 and I've installed pyPgSQL 2.3 and 2.4 respectively. When I try to create a new environment with the trac-admin command, it uses pyPgSQL to connect to a PgSQL database, but it gives me the next error:

    /usr/lib/python2.4/site-packages/pyPgSQL/libpq/__init__.py:23:
    RuntimeWarning: Python C API version mismatch for module libpq: This
    Python has API version 1012, module libpq has version 1011.
      from libpq import *
    *** glibc detected *** free(): invalid pointer: 0xb7ed83c0 ***
    Aborted

I don't know how to solve this problem, does anyone have some ideas?

Thanks in advance.
Mattia
From: Karsten H. <Kar...@gm...> - 2006-01-17 18:36:47
|
Hi all,

we are using pyPgSQL with GNUmed. When storing a PgDateTime instance (an mx.DateTime instance, that is), the _quote() function does

    "'%s'" % value

which in turn calls value.__str__(), which does the following to its seconds value:

    "%02.5f" % self.seconds

In some locales (namely where the decimal separator is something other than "."), this will produce, for example, "25,12" instead of "25.12" for 25 seconds 120 milliseconds. Now, PostgreSQL (properly?) does not accept "xx,yy" as a valid timestamp string.

Attached find a crude hack overcoming this problem. The proper fix would be to make pyPgSQL *force* mx.DateTime to deliver a valid format via value.Format('...') or some such.

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
From: Karsten H. <Kar...@gm...> - 2006-01-03 16:29:43
|
Hello pyPgSQL team,

when pyPgSQL is used with the locale "de_DE" under Python 2.4, it has a subtle bug. Basically, when a timestamp is written into a timestamp column, an error is raised. It claims "invalid format for timestamp", IOW, PostgreSQL cannot parse the value as a timestamp. The reason is that there is a "," between the seconds and microseconds part of the value instead of a dot (".").

How does this come about? The _quote() function of pyPgSQL simply does

    "'%s'" % value

to mx.DateTime.DateTime instances. The Python reference says that %s invokes str() on any Python object. Now, str() on a DateTime from mx invokes __str__ on it, which does this:

    return "%04d-%02d-%02d %02d:%02d:%05.2f" % (
        self.year, self.month, self.day,
        self.hour, self.minute, self.second
    )

Now, in a German locale the decimal point isn't represented by a dot. It is represented by a comma (","). Hence '%05.2f' % 27.77 ends up being '27,77' (which PG cannot parse) instead of '27.77' (which PG *can* parse).

The proper solution, IMO, is to explicitly format mx.DateTime instances into something PG is guaranteed to understand inside pyPgSQL._quote().

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
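A hedged sketch of the kind of explicit formatting suggested above, assuming mx.DateTime's Format() method with strftime-style codes; the fractional seconds are appended by hand so the result never depends on the locale's decimal separator:

    from mx import DateTime

    def pg_timestamp(ts):
        # Locale-independent 'YYYY-MM-DD HH:MM:SS.ffffff' string for PostgreSQL.
        whole = ts.Format('%Y-%m-%d %H:%M:%S')
        micro = int(round((ts.second - int(ts.second)) * 1000000))
        return "%s.%06d" % (whole, micro)

    ts = DateTime.DateTime(2006, 1, 3, 16, 29, 27.77)
    print pg_timestamp(ts)       # 2006-01-03 16:29:27.770000 in any locale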
From: Vladimir <pri...@uk...> - 2005-09-30 06:09:34
|
Gerhard Häring wrote:
> Vladimir wrote:
>
>> Hi ALL!
>> A short question.
>> When may I see the compiled module pyPgSQL-2.4.win32-py2.4.exe?
>
> I've built Windows binaries for Python 2.4 for the current CVS version
> of pyPgSQL. They're statically linked against the PostgreSQL 8.0.3
> client libraries (without SSL support) and uploaded them to Sourceforge
> this time:
>
> http://sourceforge.net/project/showfiles.php?group_id=16528
>
> HTH,
>
> -- Gerhard

Hi Gerhard!
Thank you so much! I think I'm not alone in needing this version.

Vladimir
From: <gh...@gh...> - 2005-09-26 08:23:21
|
Vladimir wrote:
> Hi ALL!
> A short question.
> When may I see the compiled module pyPgSQL-2.4.win32-py2.4.exe?

I've built Windows binaries for Python 2.4 for the current CVS version of pyPgSQL. They're statically linked against the PostgreSQL 8.0.3 client libraries (without SSL support) and uploaded them to Sourceforge this time:

http://sourceforge.net/project/showfiles.php?group_id=16528

HTH,

-- Gerhard
From: Vladimir <pri...@uk...> - 2005-09-21 13:41:52
|
Hi ALL!
A short question.
When may I see the compiled module pyPgSQL-2.4.win32-py2.4.exe?
Maybe there are some problems?

Thanks in advance
From: larry p. <la...@gm...> - 2005-08-31 22:52:48
|
I'm using a dictionary-interpolated string to create a query statement. When I generate the query by hand

    sql = query % params

and execute the sql from inside psql, it works. When I create a cursor and use cursor.execute(query, params), I get the following traceback:

    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "/tmp/python-5481BIu.py", line 134, in insert_title
      File "/usr/lib/python2.4/site-packages/pyPgSQL/PgSQL.py", line 3048, in execute
        self.res = self.conn.conn.query(_qstr % parms)
    TypeError: float argument required

Line 3048 seems to be where the exception is raised, but the 'self.res = ...' line occurs in method callproc, and I don't see where it gets called from inside the method execute.

The only oddball column is one that is of type numeric(9,2). Do I need to wrap this in PgNumeric before I hand the dictionary to execute? Or is there something deeper going on?

--
http://Zoneverte.org -- information explained
Do you know what your IT infrastructure does?
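A hedged sketch of one way this insert might be written, assuming pyPgSQL's pyformat paramstyle and assuming PgNumeric can be constructed from a string with explicit precision and scale (that constructor form is an assumption, not something confirmed in this thread); table, columns and values are invented:

    from pyPgSQL import PgSQL
    from pyPgSQL.PgSQL import PgNumeric

    db = PgSQL.connect(database="mydb")
    cur = db.cursor()

    params = {
        'title': 'An example title',
        'price': PgNumeric("19.95", 9, 2),   # assumed signature: value, precision, scale
    }
    # With %(name)s placeholders pyPgSQL quotes each value itself, so no
    # %f/%d conversions (which require real floats/ints) are involved.
    cur.execute("INSERT INTO titles (title, price) VALUES (%(title)s, %(price)s)",
                params)
    db.commit()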
From: Billy G. A. <bil...@de...> - 2005-08-31 03:24:42
|
On Tue, 2005-08-30 at 16:44 +0000, Rod MacNeil wrote:
> Hi All,
>
> I'm using pyPgSQL 2.4 and postgres 7.4.7 on fedora core 3.
>
> I have a question about the data type returned in cursor.description
> when oid is included in the select.
>
> When I run a query like "select oid, field1,field2 from mytable where field2='x'",
> cursor.description says the oid column is type 'rowid' if there are
> result rows, but if there are no result rows it says the type is 'blob'.
>
> Is this a bug?
>
> Rod MacNeil

Not really. It's caused by the fact that an OID can represent a rowid or a postgresql large object (i.e. blob). The only way to tell which one it is is to see if the value of the OID exists as a large object or not. Since there is no result (and hence no value), the lookup can't be made, so a type of blob is assumed.

--
Billy G. Allie <Bil...@mu...>
From: Rod M. <rma...@in...> - 2005-08-30 16:45:06
|
Hi All,

I'm using pyPgSQL 2.4 and postgres 7.4.7 on fedora core 3.

I have a question about the data type returned in cursor.description when oid is included in the select.

When I run a query like "select oid, field1,field2 from mytable where field2='x'", cursor.description says the oid column is type 'rowid' if there are result rows, but if there are no result rows it says the type is 'blob'.

Is this a bug?

Rod MacNeil
From: <gh...@fm...> - 2005-08-17 14:30:20
|
Hi all.

I want to send some dictionary as an argument to my plpythonu function. Let's say

    CREATE OR REPLACE FUNCTION test(__dictionary__)

but I can't figure out if it is possible or not. Is it?

Thanks a lot.
--
Gerardo Herzig
Direccion General de Organizacion y Sistemas
Facultad de Medicina U.B.A.
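There is no answer in the thread, but one workaround sketch is to flatten the dictionary to text on the client and rebuild it inside the function; this assumes that plpythonu exposes positional arguments through the args list (as older PL/Python versions do), and all names and values are invented:

    from pyPgSQL import PgSQL

    db = PgSQL.connect(database="mydb")
    cur = db.cursor()

    # SQL has no dictionary type to declare in the signature, so pass the
    # dict as a flattened text value and rebuild it inside the function.
    cur.execute("""
        CREATE OR REPLACE FUNCTION test(text) RETURNS text AS '
            # plpythonu exposes positional arguments via the args list
            pairs = [p.split("=", 1) for p in args[0].split(";") if p]
            return repr(dict(pairs))
        ' LANGUAGE plpythonu
    """)

    mydict = {'a': '1', 'b': '2'}
    flat = ";".join(["%s=%s" % (k, v) for k, v in mydict.items()])
    cur.execute("SELECT test(%(d)s)", {'d': flat})
    print cur.fetchone()[0]
    db.commit()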
From: Milton i. <min...@gm...> - 2005-06-16 00:46:10
|
Hello:

I am retrieving values from the following table:

    CREATE TABLE afp (
        nombre_afp varchar(75) NOT NULL,
        razon_social_afp varchar(75),
        porcentaje_afp float8,
        CONSTRAINT pk_afp PRIMARY KEY (nombre_afp)
    )

From the psql console, I do the following:

    remunex=# select * from afp;
     nombre_afp | razon_social_afp | porcentaje_afp
    ------------+------------------+----------------
     G          |                  |            1.9
     AFP        | AFP S.A.         |            1.7
     GT         |                  |            2.6
    (3 rows)

That is to say, the floats are OK! Now, I do the following from Python:

    def lista_datos(self):
        self.modelo.clear()
        sql = """SELECT nombre_afp, razon_social_afp, porcentaje_afp
                 FROM afp
                 ORDER BY nombre_afp"""
        self.cursor.execute(sql)
        r = self.cursor.fetchall()
        print r

and the result is the following:

    [['AFP', 'AFP S.A.', 1.0], ['G', '', 1.0], ['GT', '', 2.0]]

and here is the error: if you pay attention, the decimal part of the floats is zero. Can somebody help me with this? Could it be some configuration of global variables? This did not happen before (with Python 2.3.5), and now with Python 2.4 it is happening to me.

salu2!
--
Milton Inostroza Aguilera
From: Milton i. <min...@gm...> - 2005-06-03 09:25:31
|
Thanks, in your source I found the

    from pyPgSQL.PgSQL import PgBytea

:D

2005/6/3, Milton inostroza <min...@gm...>:
> Hello: you know I use pypgSql to connect to the database. I sincerely
> looked at your source, but I do not understand much because I am a
> beginner in this. If you would be so kind as to show me a concrete
> example of how I can do it through the connection created with pypgsql,
> I would thank you very much, since this is all I need to finish the
> system that I am developing. salu2!
>
> 2005/6/2, Karsten Hilbert <Kar...@gm...>:
>>> On Thu, Jun 02, 2005 at 12:39:09PM -0400, Milton inostroza wrote:
>>
>>> http://savannah.gnu.org/cgi-bin/viewcvs/gnumed/gnumed/gnumed/client/business/gmMedDoc.py?rev=1.2&content-type=text/vnd.viewcvs-markup
>>
>> Sorry, this link doesn't work, please try:
>>
>> http://savannah.gnu.org/cgi-bin/viewcvs/gnumed/gnumed/gnumed/client/business/gmMedDoc.py
>>
>> and follow the "view" link.
>>
>> Karsten

--
Milton Inostroza Aguilera
Academic Secretary, Student Center
In charge of sponsorships - 6to. Encuentro Nacional de Linux
Developer of RemuneX (a payroll system licensed under the GPL)
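For completeness, a rough sketch of how PgBytea is typically used once imported; it assumes a table with a bytea column and pyPgSQL's pyformat parameter style, with all names invented:

    from pyPgSQL import PgSQL
    from pyPgSQL.PgSQL import PgBytea

    db = PgSQL.connect(database="mydb")
    cur = db.cursor()

    data = open("scan.png", "rb").read()
    # Wrapping the raw string in PgBytea lets pyPgSQL escape it correctly
    # for a bytea column.
    cur.execute("INSERT INTO documents (name, data) VALUES (%(n)s, %(d)s)",
                {'n': 'scan.png', 'd': PgBytea(data)})
    db.commit()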
From: Milton i. <min...@gm...> - 2005-06-03 08:45:32
|
Hello: you know I use pypgSql to connect to the database. I sincerely looked at your source, but I do not understand much because I am a beginner in this. If you would be so kind as to show me a concrete example of how I can do it through the connection created with pypgsql, I would thank you very much, since this is all I need to finish the system that I am developing. salu2!

2005/6/2, Karsten Hilbert <Kar...@gm...>:
>> On Thu, Jun 02, 2005 at 12:39:09PM -0400, Milton inostroza wrote:
>
>> http://savannah.gnu.org/cgi-bin/viewcvs/gnumed/gnumed/gnumed/client/business/gmMedDoc.py?rev=1.2&content-type=text/vnd.viewcvs-markup
>
> Sorry, this link doesn't work, please try:
>
> http://savannah.gnu.org/cgi-bin/viewcvs/gnumed/gnumed/gnumed/client/business/gmMedDoc.py
>
> and follow the "view" link.
>
> Karsten

--
Milton Inostroza Aguilera
Academic Secretary, Student Center
In charge of sponsorships - 6to. Encuentro Nacional de Linux
Developer of RemuneX (a payroll system licensed under the GPL)
From: Karsten H. <Kar...@gm...> - 2005-06-02 18:20:21
|
> On Thu, Jun 02, 2005 at 12:39:09PM -0400, Milton inostroza wrote:
> http://savannah.gnu.org/cgi-bin/viewcvs/gnumed/gnumed/gnumed/client/business/gmMedDoc.py?rev=1.2&content-type=text/vnd.viewcvs-markup

Sorry, this link doesn't work, please try:

http://savannah.gnu.org/cgi-bin/viewcvs/gnumed/gnumed/gnumed/client/business/gmMedDoc.py

and follow the "view" link.

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346