From: Rob Brown-B. <ro...@zo...> - 2003-07-29 20:39:27
|
Hi, Running the following SQL via pyPgSQL:

INSERT INTO "songs" ("p_key", "title", "artist", "date", "album", "tracknumber", "time", "file") VALUES ("ea0e89f596619af1837f424c0767ffd9", "Sugar Mountain", "Neil Young", "1979", "Live Rust", "1", "302.226666667", "/oggs/Neil_Young-Sugar_Mountain.ogg");

I get this error:

Execute failed ERROR: Attribute 'ea0e89f596619af1837f424c0767ffd' not found Rolling back a transaction

followed by this:

libpq.Warning: NOTICE: identifier "ea0e89f596619af1837f424c0767ffd9" will be truncated to "ea0e89f596619af1837f424c0767ffd"

The p_key column is a varchar with a length of 64, and if I cut and paste the SQL string into phpPgAdmin it inserts without a problem. Any clues? -- * Rob Brown-Bayliss * ================= * zoism.org |
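The error above is a quoting problem rather than a pyPgSQL bug: in SQL, double quotes delimit *identifiers* (column and table names), which PostgreSQL truncates to its identifier length limit — hence the NOTICE — while string literals take single quotes. A minimal pure-Python sketch of the distinction (the `quote_literal` helper here is illustrative only, not part of pyPgSQL; with pyPgSQL you would normally pass the values as `%s` parameters to `cursor.execute()` and let the driver do the quoting):

```python
def quote_literal(value):
    """Quote a Python string as a SQL string literal: single quotes,
    with any embedded single quote doubled."""
    return "'" + value.replace("'", "''") + "'"

# Double quotes would make PostgreSQL read each value as an *identifier*;
# single quotes make it a string literal, which is what INSERT needs.
stmt = 'INSERT INTO songs (p_key, title) VALUES (%s, %s)' % (
    quote_literal("ea0e89f596619af1837f424c0767ffd9"),
    quote_literal("Sugar Mountain"),
)
```

With DB-API parameter binding (`cur.execute("INSERT INTO songs (p_key, title) VALUES (%s, %s)", ("ea0e89f596619af1837f424c0767ffd9", "Sugar Mountain"))`) the quoting question disappears entirely.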
From: Dick K. <D.J...@ch...> - 2003-07-28 14:48:57
|
Hi Gerhard, The Python server would be neat, if we could pull it off. When you get going, send me some notes; I would like to participate. In the meantime the users will have to use the system the old-fashioned way (with a single database and communication facilities). Kind regards, Dick |
From: <gh...@gh...> - 2003-07-28 11:52:51
|
Dick Kniep wrote: > [...] So, what I want is > a local connection to the database, with asynchronous updates to > databases that are in the same 'logical' cluster of databases. c-jdbc > looks like the proper candidate for this action, The description of c-jdbc looks neat; however, I have my doubts whether it can work reliably, because database replication is a very complex topic. I have no experience in it myself, but I briefly talked with a colleague who created a whitepaper for adding database replication to an existing Oracle database. All I can say is that it gets complicated really quickly. > however no luck because > we developed using Python..... Well, there are multiple possibilities to use c-jdbc from Python nevertheless. I'd recommend creating a Java server proxy that uses JDBC and that you access from Python via CORBA, XML-RPC or SOAP. Alternatively, c-jdbc being open source, one could look at the algorithms used and create a similar library for Python. I had something like this in mind recently :) > [...] I have looked at SQLRelay, and it does address some, but not all the > requirements, because there is no mechanism to direct update queries > transparently to other machines. Off course I want to hide the > complexity of updating multiple databases from my application. It looks > as though it can be used with pyPgsql. Is that the case? No, SQLRelay cannot be used with pyPgSQL. There is, however, a DB-API Python library to access a SQLRelay backend. It doesn't have pyPgSQL's special features like Unicode, PgResultSets etc., of course. -- Gerhard |
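The proxy approach suggested here can be sketched end-to-end with nothing but the standard library. Everything in this sketch is hypothetical: the `execute` method name, the dict standing in for the Java/JDBC (c-jdbc) side, and the modern module names (`xmlrpc.server`/`xmlrpc.client`; in the Python of this era these were `SimpleXMLRPCServer` and `xmlrpclib`):

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# Stand-in for the Java/JDBC backend: a canned result set per SQL string.
canned_results = {"SELECT 1": [[1]]}

# The "Java server proxy" exposes a single execute(sql) call over XML-RPC.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda sql: canned_results.get(sql, []), "execute")
threading.Thread(target=server.serve_forever, daemon=True).start()

# The Python client talks to the proxy instead of to JDBC directly.
host, port = server.server_address
proxy = xmlrpc.client.ServerProxy("http://%s:%d/" % (host, port))
rows = proxy.execute("SELECT 1")   # XML-RPC marshals the nested list back
server.shutdown()
```

In a real deployment the server side would be the Java process holding the c-jdbc connection, and the Python client would wrap `proxy.execute` behind a DB-API-like cursor.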
From: Dick K. <D.J...@ch...> - 2003-07-28 10:32:10
|
On Sat, 2003-07-26 at 00:51, Karsten Hilbert wrote: Hi Karsten, I understand that I am comparing apples and oranges. And I do not expect that anything is done. However, I like the discussion, and possibly, unexpectedly, an answer comes from someone on the list. Maybe I should have made it clear why I posed this strange question. We have developed a product which uses PostgreSQL as a database, Python, wxPython and pyPgSQL. This works OK, no problems whatsoever. However, some larger companies want to use the application as well. One company has some 15 locations where people are working. Of course this can be supported with high-bandwidth online connections, but these connections are expensive and unreliable. Therefore I have been looking for a way to get rid of the direct online connection. So, what I want is a local connection to the database, with asynchronous updates to databases that are in the same 'logical' cluster of databases. c-jdbc looks like the proper candidate for this action, however no luck because we developed using Python.....

Gains from this setup would be:
- if the connection goes down, work can continue
- all databases are mirrors of each other, so if a server has a hardware problem, work can continue by rerouting the traffic
- performance of the local setups would be significantly better
- predictability of the response time would be better
- lower bandwidth requirements, as the updates are done asynchronously

On the downside:
- extra hardware (disk space and servers) is required
- complexity of the updates to all databases
- a short delay (minutes) between updates to one database and updates to the others

I have looked at SQLRelay, and it does address some, but not all, of the requirements, because there is no mechanism to direct update queries transparently to other machines. Of course I want to hide the complexity of updating multiple databases from my application. It looks as though it can be used with pyPgSQL. Is that the case?

Thanks to anyone responding, Dick Kniep |
From: Billy G. A. <bil...@mu...> - 2003-07-27 02:37:52
|
Announce: pyPgSQL - Version 2.4 is released.
===========================================================================

pyPgSQL v2.4 has been released. It is available at http://pypgsql.sourceforge.net.

pyPgSQL is a package of two modules that provide a Python DB-API 2.0 compliant interface to PostgreSQL databases. The first module, libpq, is written in C and exports the PostgreSQL C API to Python. The second module, PgSQL, provides the DB-API 2.0 compliant interface and support for various PostgreSQL data types, such as INT8, NUMERIC, MONEY, BOOL, ARRAYS, etc. This module is written in Python and works with PostgreSQL 7.0 or later and Python 2.0 or later. It was tested with PostgreSQL 7.0.3, 7.1.3, 7.2.2, 7.3 and Python 2.0.1, 2.1.3 and 2.2.2.

Note: It is highly recommended that you use PostgreSQL 7.2 or later and Python 2.1 or later. If you want to use PostgreSQL Large Objects under Python 2.2.x, you *must* use Python 2.2.2 or later because of a bug in earlier 2.2 versions.

Project homepages:
pyPgSQL: http://pypgsql.sourceforge.net/
PostgreSQL: http://www.postgresql.org/
Python: http://www.python.org/

---------------------------------------------------------------------------
ChangeLog: Changes since pyPgSQL Version 2.3
===========================================================================

=-=-=-=-=-=-=-=-=-=-=-=-=- ** IMPORTANT NOTE ** =-=-=-=-=-=-=-=-=-=-=-=-=-=
NOTE: There is a change to the Connection.binary() function that *could* cause existing code to break. Connection.binary() no longer commits the transaction used to create the large object. The application developer is now responsible for committing (or rolling back) the transaction.
-=-=-=-=-=-=-=-=-=-=-=-=-= ** IMPORTANT NOTE ** -=-=-=-=-=-=-=-=-=-=-=-=-=-

Changes to README
-----------------
* Updates for 2.4.

Changes to PgSQL.py
-------------------
* Applied a patch from Laurent Pinchart to allow _quote to correctly process objects that are sub-classed from String and Long types.
* Changed the name of the quoting function back to _quote. Variables named like __*__ should be restricted to system names.
* PgTypes is now hashable. repr() of a PgType will now return the repr() of the underlying OID.
* Connection.binary() will now fail if autocommit is enabled.
* Connection.binary() will no longer commit the transaction after creating the large object. The application developer is now responsible for committing (or for rolling back) the transaction [Bug #747525].
* Added PG_TIMETZ to the mix [Patch #708013].
* Pg_Money will now accept a string as a parameter.
* PostgreSQL int2, int and int4 will now be cast into Python ints; int8 will be cast into a Python long; float4, float8 and money types will be cast into a Python float.
* Corrected a problem with the PgNumeric.__radd__ method [Bug #694358].
* Corrected a problem with the conversion of negative integers (with a given scale and precision) to PgNumerics [Bug #694358].
* Worked around a problem where the precision and scale of a query result can be different from the first result in the result set [Bug #697221].
* Changed the code so that the display length in the cursor.description attribute is always None instead of '-1'.
* Fixed another problem with interval <-> DateTimeDelta casting.
* Corrected a problem that caused the close of a portal (i.e. a PostgreSQL cursor) to fail.
* Corrected a problem with interval <-> DateTimeDelta casting [Bug #653044].
* Corrected a problem found by Adam Buraczewski in the __setupTransaction function.
* Allow both 'e' and 'E' to signify an exponent in the PgNumeric constructor.
* Corrected some problems that were missed in yesterday's fixes (thanks, Adam, for the help with the problems).

Changes to libpqmodule.c
------------------------
* On win32, we usually statically link against libpq. Because of fortunate circumstances, a problem didn't show up until now: we need to call WSAStartup() to initialize the socket stuff from Windows *in our module* in order for the statically linked libpq to work. I just took the relevant DllMain function from the libpq sources and put it here.
* Modified some comments to reflect reality.
* Applied a patch from Laurent Pinchart: in libPQquoteString, bytea values are quoted using as many as 5 bytes per input byte (0x00 is quoted '\\000'), so allocating (slen * 4) + 3 is not enough for data that contains lots of 0x00 bytes.
* Added PG_TIMETZ to the mix [Patch #708013].

Changes to pgboolean.c
----------------------
* Changed the name of the quoting function back to _quote. __*__ type names should be restricted to system names.

Changes to pgconnection.c
-------------------------
* Applied a patch by Laurent Pinchart to correct a problem with lo_import, lo_export and lo_unlink.
* In case PQgetResult returns NULL, libPQgetResult now returns a Python None, as the docstring says. This is necessary in order to be able to cancel queries: after cancelling a query with PQrequestCancel, we need to read results until PQgetResult returns NULL.

Changes to pglargeobject.c
--------------------------
* Changed the name of the quoting function back to _quote. __*__ type names should be restricted to system names.

Changes to pgnotify.c
---------------------
* Fixed a bug in the code. The code in question used to work, but doesn't anymore (possibly a change in libpq?).

--
Billy G. Allie | Domain....: Bil...@mu... | MSN.......: B_G...@em... | 7436 Hartwell, Dearborn, MI 48126 | (313) 582-1540 |
From: <gh...@gh...> - 2003-07-27 01:03:45
|
Dick Kniep wrote: > On Sat, 2003-07-26 at 01:29, Gerhard Häring wrote: > >>Interesting as this library may be, I have no idea why you post this to >>the pyPgSQL-users list. > > Hi Gerhard, > > Yep, maybe completely off-topic. However, wouldn't it be nice if we could > use our Python programs with pyPgSQL using this library? I think I'm > daydreaming..... Yes, you are daydreaming, if not hallucinating :-P As far as I can see, this is just a Java library (I haven't read much, let alone downloaded and tried it). To use Java libraries, there's Jython. To use Java stuff from CPython, there's JPE, but I don't know (and doubt) whether it fully works. JPE certainly doesn't look to be well supported, from what I can judge from its CVS tree. -- Gerhard |
From: Dick K. <D.J...@ch...> - 2003-07-26 21:41:46
|
On Sat, 2003-07-26 at 01:29, Gerhard Häring wrote: > Interesting as this library may be, I have no idea why you post this to > the pyPgSQL-users list. Hi Gerhard, Yep, maybe completely off-topic. However, wouldn't it be nice if we could use our Python programs with pyPgSQL using this library? I think I'm daydreaming..... Kind regards, Dick > > -- Gerhard |
From: <gh...@gh...> - 2003-07-25 23:29:33
|
Dick Kniep wrote: > Hi list, > > C-jdbc is a tool that makes access possible from Java to clustered > databases. [...] Interesting as this library may be, I have no idea why you post this to the pyPgSQL-users list. -- Gerhard |
From: Karsten H. <Kar...@gm...> - 2003-07-25 23:00:45
|
Dick, > Is there a possibility that you folks could provide something like a > connection to c-jdbc, in such a way that we could use its potential > also? Maybe I am wrong here but as I understand it c-jdbc and pyPgSQL are at the same level of abstraction (with c-jdbc following the jdbc model of things and pyPgSQL following the Python DB-API 2.0 model of things). It seems to be a weird mix to try to connect the latter to the former (since they are both designed to connect to the same thing - a SQL-driven RDBMS). Does "sqlrelay" address some of your needs, however ? It's a DB-API Python module that provides a few goodies resembling some of the things you mention about c-jdbc. Karsten -- GPG key ID E4071346 @ wwwkeys.pgp.net E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346 |
From: Dick K. <D.J...@ch...> - 2003-07-25 22:32:55
|
Hi list, C-jdbc is a tool that makes it possible to access clustered databases from Java. This provides load balancing, hot failover and all kinds of other goodies that are most welcome to sites that are working in different locations. Is there a possibility that you folks could provide something like a connection to c-jdbc, in such a way that we could use its potential also? I haven't got a clue how to go about this, as I am a Python programmer and certainly not a C or Java programmer, but if I can help in any way if someone is willing to take a look at this, I most certainly will. Kind regards, Dick Kniep |
From: Karsten H. <Kar...@gm...> - 2003-07-25 07:20:22
|
> If in your Python code you use Unicode strings, you need to: > a) use client_encoding parameter in connect() call > b) tell the PostgreSQL backend with "SET CLIENT_ENCODING TO ..." > > if in your Python code you use only byte strings, you need to: > - tell the PostgreSQL backend with "SET CLIENT_ENCODING TO ..." > > All clear now? Yes. Thank you. I will now try to understand the *reason* by reading the source. Karsten -- GPG key ID E4071346 @ wwwkeys.pgp.net E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346 |
From: <gh...@gh...> - 2003-07-24 23:29:33
|
Karsten Hilbert wrote: > Hello Gerhard, > >>>I'm having trouble with Unicode. I have set up the database to use UTF-8. >>>I am trying to get some non-ASCII strings in there and I am failing. > > >>>dBconnection=PgSQL.connect("::ctanWeb:ftpmaint:") >> >>You need to tell *pyPgSQL*, which client encoding to use. Use the >>parameter client_encoding="utf-8". > > Is this also necessary if I use a client encoding of, say, > "latin1" or "iso-8859-15" or some such? [...] If in your Python code you use Unicode strings, you need to: a) use client_encoding parameter in connect() call b) tell the PostgreSQL backend with "SET CLIENT_ENCODING TO ..." if in your Python code you use only byte strings, you need to: - tell the PostgreSQL backend with "SET CLIENT_ENCODING TO ..." All clear now? Or should I explain in more detail? -- Gerhard |
From: Karsten H. <Kar...@gm...> - 2003-07-24 22:40:21
|
Hello Gerhard, >> I'm having trouble with Unicode. I have set up the database to use UTF-8. >> I am trying to get some non-ASCII strings in there and I am failing. >> dBconnection=PgSQL.connect("::ctanWeb:ftpmaint:") > You need to tell *pyPgSQL*, which client encoding to use. Use the > parameter client_encoding="utf-8". Is this also necessary if I use a client encoding of, say, "latin1" or "iso-8859-15" or some such? What I mean is: It makes sense to tell pyPgSQL that my Python code uses "utf-8" in strings so it knows that it has to deal with u''-strings rather than ''-strings, but why would I need this if I use latin1 in ''-strings? In fact, pyPgSQL might be able to distinguish u''-strings from ''-strings by a type check (u'' should return "unicode", shouldn't it?). So I wonder why I need to tell *pyPgSQL* what encoding I use? (I understand why I need to tell PostgreSQL about that.) I would have thought that pyPgSQL just hands its input to libpq. Or does it need to know about the encoding so it knows how to properly quote/escape input? > To also get back Unicode strings for > text columns, use the parameter unicode_results=1. That makes sense in that I will get back u''-strings instead of ''-strings. Karsten -- GPG key ID E4071346 @ wwwkeys.pgp.net E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346 |
From: Billy G. A. <bil...@mu...> - 2003-07-04 05:56:52
|
Luc Stepniewski wrote:
>I put some data in a table, which has a record of type 'bytea'.
>I inserted that data using the PgSQL.PgQuoteBytea method.
>
>When I want to extract the data, (I guess) I need, just after retrieving the
>result(s) from a SELECT, to use PgSQL.PgUnQuoteBytea.
>But when I use it, I get the error "PgUnQuoteString() argument 1 must be
>string, not instance".
>Doing a "print myinstance.__class__" prints 'pyPgSQL.PgSQL.PgBytea'. So I
>guess it's ok, so why do I get an error?
>Here is the snippet of code I used:
>
>===========
> recup = """SELECT docbody FROM docs where docid=%s """ %
>         PgSQL.PgQuoteString(form.getvalue('docid'))
> cur.execute(recup)
> res = cur.fetchall()
> print PgSQL.PgUnQuoteBytea(res[0]['docbody'])
>===========
>
>Looking at the documentation for the methods, I see that PgUnQuoteBytea takes
>a string as parameter (!?), so if I try to stringify it, I get the following
>error: "Bad input string for type bytea", doing this:
>===========
>c = `res[0]['docbody']`
>d = PgSQL.PgUnQuoteBytea(c)
>===========
>
>Any idea?

You are trying to do things which pyPgSQL is already doing for you. To print a field that is defined in the database as bytea, just print it. It has already been processed by PgUnQuoteBytea. In your example:

recup = "SELECT docbody FROM docs WHERE docid = %s"
cur.execute(recup, form.getvalue('docid'))
res = cur.fetchall()
print res[0]['docbody']

You should not have to use PgQuoteString() or PgUnQuoteBytea() directly. In order to place a bytea value into the database, you can create a new PgBytea object and pass that object to execute as a parameter:

cur.execute("""INSERT INTO docs (docid, docbody) VALUES (%s, %s)""", docid, PgBytea(docbody))

I hope this clarifies things for you. -- Billy G. Allie |
From: Luc S. <luc...@ad...> - 2003-07-03 13:23:18
|
I put some data in a table, which has a column of type 'bytea'. I inserted that data using the PgSQL.PgQuoteBytea method. When I want to extract the data, (I guess) I need, just after retrieving the result(s) from a SELECT, to use PgSQL.PgUnQuoteBytea. But when I use it, I get the error "PgUnQuoteString() argument 1 must be string, not instance". Doing a "print myinstance.__class__" prints 'pyPgSQL.PgSQL.PgBytea'. So I guess it's ok, so why do I get an error? Here is the snippet of code I used:

===========
recup = """SELECT docbody FROM docs where docid=%s """ % PgSQL.PgQuoteString(form.getvalue('docid'))
cur.execute(recup)
res = cur.fetchall()
print PgSQL.PgUnQuoteBytea(res[0]['docbody'])
===========

Looking at the documentation for the methods, I see that PgUnQuoteBytea takes a string as parameter (!?), so if I try to stringify it, I get the following error: "Bad input string for type bytea", doing this:

===========
c = `res[0]['docbody']`
d = PgSQL.PgUnQuoteBytea(c)
===========

Any idea? Thanks, Luc Stepniewski |
From: Billy G. A. <bil...@mu...> - 2003-06-28 20:17:30
|
Paolo Alexis Falcone wrote:
>Whenever I try to access a table with many rows using PgSQL's
>fetchall(), this happens:
>
>>>> from pyPgSQL import PgSQL
>>>> db = PgSQL.connect("192.168.0.8:5432:whitegold","dondon","dondon")
>>>> PgSQL.NoPostgresCursor = 1
>>>> cur = db.cursor()
>>>> cur.execute("SELECT * FROM customer")
>>>> data = cur.fetchall()
>
>Traceback (most recent call last):
>  File "<stdin>", line 1, in ?
>  File "/usr/lib/python2.2/site-packages/pyPgSQL/PgSQL.py", line 3106, in fetchall
>    return self.__fetchManyRows(self._rows_, _list)
>  File "/usr/lib/python2.2/site-packages/pyPgSQL/PgSQL.py", line 2684, in __fetchManyRows
>    _j = self.__fetchOneRow()
>  File "/usr/lib/python2.2/site-packages/pyPgSQL/PgSQL.py", line 2660, in __fetchOneRow
>    _r.getvalue(self._idx_, _i)))
>  File "/usr/lib/python2.2/site-packages/pyPgSQL/PgSQL.py", line 717, in typecast
>    return PgNumeric(value, _p, _s)
>  File "/usr/lib/python2.2/site-packages/pyPgSQL/PgSQL.py", line 1335, in __init__
>    raise OverflowError, "value too large for PgNumeric"
>OverflowError: value too large for PgNumeric
>
>The aforementioned table, customer, only has 1023 entries with the
>following structure:
>
>CREATE TABLE customer (ccustcode varchar(80), cgroupcode varchar(10),
>clastname varchar(80), cfirstname varchar(80), cmi varchar(10),
>ccompany varchar(80), caddress1 varchar(80), caddress2 varchar(80),
>ccity varchar(80), cprovince varchar(80), czipcode varchar(10),
>iterms integer, ycredit_limit numeric, npenalty_rate numeric,
>default_routecode varchar(10), lisdirector boolean);
>
>PgSQL's fetchone() fortunately works though, as well as using
>fetchall() on tables with few rows. Is there any alternative way of
>using PyPgSQL that would not overflow in this situation?

Paolo, The problem is not the number of rows, but the fact that the conversion of a PostgreSQL numeric to a PgNumeric is failing. This problem has been fixed in the code in the CVS repository for the pyPgSQL project <http://sourceforge.net/cvs/?group_id=16528> on SourceForge. We will also be releasing a new version of pyPgSQL within the next couple of weeks. -- Billy G. Allie |
From: Billy G. A. <bil...@mu...> - 2003-06-28 02:54:10
|
Karsten Hilbert wrote: >Billy, > >you stated that ; is a statement _separator_ in SQL which >makes sense. However, this: > > http://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=sql-syntax.html#SQL-SYNTAX-LEXICAL > (1.1.4. Special Characters, 5th bullet) > >suggests otherwise. Given PGs track record of being (sanely) >anal about SQL conformity I wonder what to make of it. I >certainly am no expert on such matters. > >Thanks for your help, >Karsten First, I didn't say that, someone else on the list did ;-) The point you're referring to states that a semicolon is a command terminator. Multiple commands can be entered (even on the same line), each terminated (separated) by a semicolon. Commands are also terminated by the end of the input stream, which is why a semicolon is not needed in the query given to the execute() method if there is only one command in the query. Note that adding the semicolon to a single-command query does not cause any problems; it just terminates the one command. I guess what I'm trying to say is that if you equate 'statement' with 'command' and 'separator' with 'terminator', then there is no conflict. -- Billy G. Allie |
From: Karsten H. <Kar...@gm...> - 2003-06-27 10:20:23
|
Billy, you stated that ; is a statement _separator_ in SQL which makes sense. However, this: http://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=sql-syntax.html#SQL-SYNTAX-LEXICAL (1.1.4. Special Characters, 5th bullet) suggests otherwise. Given PGs track record of being (sanely) anal about SQL conformity I wonder what to make of it. I certainly am no expert on such matters. Thanks for your help, Karsten -- GPG key ID E4071346 @ wwwkeys.pgp.net E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346 |
From: Karsten H. <Kar...@gm...> - 2003-06-27 09:00:53
|
> No. It's probably entirely unrelated to your problem and just one of my > favourite nitpicks, but the semicolon is a statement *separator*, not an > end-of-statement character. The Python DB-API only allows to send one > statement per .execute() call, so don't use the semicolon when using the > DB-API. Thanks. This should be in the docs in bold letters. Karsten -- GPG key ID E4071346 @ wwwkeys.pgp.net E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346 |
From: Karsten H. <Kar...@gm...> - 2003-06-27 09:00:46
|
Billy, I am CC'ing this to one of our developers who experienced the problem (under Debian unstable, with Python 2.2.*). He may be able to provide more detail. Regards, Karsten

Ian, this is what I posted:

>> suppose I want to find occurrences of "test" in a column
>> "data" in a table "testtable". SQL would be:
>>
>> select * from testtable where data='test';
>>
>> Rewriting that for use with pyPgSQL:
>>
>> query = "select * from testtable where data=%s;"
>> arg = "test"
>> curs.execute(query, arg)
>>
>> Right ?
>>
>> However, this will produce the query:
>>
>> select * from testtable where data='test;'
>>
>> which is wrong.
>>
>> Is this a bug or am I not using pyPgSQL properly (if so it
>> needs to be in the docs) ?

> I can't seem to re-create the problem you seem to be having:
>
> $ python
> Python 2.2.2 (#7, Nov 27 2002, 17:10:05) [C] on openunix8
> Type "help", "copyright", "credits" or "license" for more information.
> >>> from pyPgSQL import PgSQL
> >>> cx = PgSQL.connect(password='********')
> >>> cx.conn.toggleShowQuery
> 'On'
> >>> cu = cx.cursor()
> QUERY: BEGIN WORK
> >>> arg = '424022'
> >>> cu.execute('select * from a where s = %s;', arg)
> QUERY: DECLARE "PgSQL_0819742C" CURSOR FOR *select * from a where s = '424022';*
> QUERY: FETCH 1 FROM "PgSQL_0819742C"
> QUERY: SELECT typname, -1 , typelem FROM pg_type WHERE oid = 23
> QUERY: SELECT typname, -1 , typelem FROM pg_type WHERE oid = 25
> >>> cu.close()
> QUERY: CLOSE "PgSQL_0819742C"
> >>> query = "select * from a where s = %s;"
> >>> cu = cx.cursor()
> >>> cu.execute(query, arg)
> QUERY: DECLARE "PgSQL_081921B4" CURSOR FOR select * from a where s = '424022';
> QUERY: FETCH 1 FROM "PgSQL_081921B4"
> >>> cu.fetchone()
> [152, 3, '424022']
>
> Can you provide more detail of what you were doing when the problem occurred?
> You can show the query being sent to the backend by entering the following command:
>
> cx.conn.toggleShowQuery
>
> Where cx is the connection object. If you can send the output of a Python session that shows the error, I will be able to provide more help.

-- GPG key ID E4071346 @ wwwkeys.pgp.net E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346 |
From: <gh...@gh...> - 2003-06-27 05:04:24
|
Karsten Hilbert wrote: > Dear all, > > suppose I want to find occurrences of "test" in a column > "data" in a table "testtable". SQL would be: > > select * from testtable where data='test'; > > Rewriting that for use with pyPgSQL: > > query = "select * from testtable where data=%s;" > arg = "test" > curs.execute(query, arg) > > Right ? [...] No. It's probably entirely unrelated to your problem and just one of my favourite nitpicks, but the semicolon is a statement *separator*, not an end-of-statement character. The Python DB-API only allows sending one statement per .execute() call, so don't use the semicolon when using the DB-API. -- Gerhard |
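The one-statement-per-execute rule is part of the DB-API itself, not something specific to pyPgSQL. A small sketch using sqlite3 instead (chosen only because it ships with Python and needs no database server; note sqlite3 uses `?` placeholders where pyPgSQL uses `%s`):

```python
import sqlite3

cx = sqlite3.connect(":memory:")
cu = cx.cursor()
cu.execute("CREATE TABLE testtable (data TEXT)")
cu.execute("INSERT INTO testtable (data) VALUES (?)", ("test",))

# No trailing semicolon needed: the end of the string terminates the command.
cu.execute("SELECT data FROM testtable WHERE data = ?", ("test",))
row = cu.fetchone()

# Two statements in one execute() call violate the DB-API contract.
try:
    cu.execute("SELECT 1; SELECT 2")
    multi_ok = True
except (sqlite3.Warning, sqlite3.ProgrammingError):
    multi_ok = False
```

(The exception type for multiple statements varies across Python versions, hence catching both.)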
From: <gh...@gh...> - 2003-06-27 05:00:37
|
Jim Hefferon wrote: > Hello, > > I'm having trouble with Unicode. I have set up the database to use UTF-8. > I am trying to get some non-ASCII strings in there and I am failing. > > Here is a short version of the script: > -------------------------------------- > #!/usr/bin/python -u > # test unicode support in pyPgSQL > import string; > from pyPgSQL import libpq > from pyPgSQL import PgSQL > > dBconnection=PgSQL.connect("::ctanWeb:ftpmaint:") First problem: You need to tell *pyPgSQL*, which client encoding to use. Use the parameter client_encoding="utf-8". To also get back Unicode strings for text columns, use the parameter unicode_results=1. > dBcursor=dBconnection.cursor() Second problem: you need to tell the PostgreSQL backend in which client encoding you send your data and want it back: dBcursor.execute("set client_encoding to unicode") > dBcursor.execute("SELECT * FROM authors") > print dBcursor.fetchone() # works fine; I'm connected > > # these two also work fine > dBcursor.execute("INSERT INTO authors (name,email) VALUES ('Mike Jones','mik...@ct...')") > dBcursor.execute("INSERT INTO authors (name,email) VALUES (%(name)s,%(email)s)",{'name':'Mike Jones','email':'mik...@ct...'}) > > # this does not work > dBcursor.execute("INSERT INTO authors (name,email) VALUES ('Mike Schr\xf6der','mik...@ct...')") This cannot work, because it is not a valid UTF-8 string. Consider: ISO-8859-1: 'Mike Schröder' UTF-8: 'Mike Schr\xc3\xb6der' > # I cannot get to run, even > dBcursor.execute("INSERT INTO authors (name,email) VALUES (%(name)s,%(email)s)",{'name':u'Mike Schr\xf6der','email':u'mik...@ct...'}) This, however, will work with my above adjustments. > [...] -- Gerhard |
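The byte-level difference Gerhard points out can be checked without any database. A pure-Python illustration (written with modern bytes/str literals; in the Python 2 of this thread the same bytes appear as ''-strings vs u''-strings):

```python
# \xf6 is o-umlaut in Latin-1: a single byte, which is not valid UTF-8
# on its own.
latin1_bytes = b"Mike Schr\xf6der"
text = latin1_bytes.decode("latin-1")   # decode with the *right* encoding

# The UTF-8 encoding of the same text uses the two-byte sequence \xc3\xb6.
utf8_bytes = text.encode("utf-8")

# Feeding the Latin-1 bytes to a UTF-8 decoder fails -- which is
# essentially what the backend does when CLIENT_ENCODING is UTF-8.
try:
    latin1_bytes.decode("utf-8")
    decoded_ok = True
except UnicodeDecodeError:
    decoded_ok = False
```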
From: Billy G. A. <bil...@mu...> - 2003-06-27 01:53:35
|
Karsten Hilbert wrote:
>Dear all,
>
>suppose I want to find occurrences of "test" in a column
>"data" in a table "testtable". SQL would be:
>
>select * from testtable where data='test';
>
>Rewriting that for use with pyPgSQL:
>
>query = "select * from testtable where data=%s;"
>arg = "test"
>curs.execute(query, arg)
>
>Right ?
>
>However, this will produce the query:
>
> select * from testtable where data='test;'
>
>which is wrong.
>
>Is this a bug or am I not using pyPgSQL properly (if so it
>needs to be in the docs) ?
>
>Thanks,
>Karsten
>PS: I do know how to rewrite the query so it works. I am just
>wondering about the above behaviour.

I can't seem to re-create the problem you seem to be having:

$ python
Python 2.2.2 (#7, Nov 27 2002, 17:10:05) [C] on openunix8
Type "help", "copyright", "credits" or "license" for more information.
>>> from pyPgSQL import PgSQL
>>> cx = PgSQL.connect(password='********')
>>> cx.conn.toggleShowQuery
'On'
>>> cu = cx.cursor()
QUERY: BEGIN WORK
>>> arg = '424022'
>>> cu.execute('select * from a where s = %s;', arg)
QUERY: DECLARE "PgSQL_0819742C" CURSOR FOR *select * from a where s = '424022';*
QUERY: FETCH 1 FROM "PgSQL_0819742C"
QUERY: SELECT typname, -1 , typelem FROM pg_type WHERE oid = 23
QUERY: SELECT typname, -1 , typelem FROM pg_type WHERE oid = 25
>>> cu.close()
QUERY: CLOSE "PgSQL_0819742C"
>>> query = "select * from a where s = %s;"
>>> cu = cx.cursor()
>>> cu.execute(query, arg)
QUERY: DECLARE "PgSQL_081921B4" CURSOR FOR select * from a where s = '424022';
QUERY: FETCH 1 FROM "PgSQL_081921B4"
>>> cu.fetchone()
[152, 3, '424022']
>>>

Can you provide more detail of what you were doing when the problem occurred? You can show the query being sent to the backend by entering the following command:

cx.conn.toggleShowQuery

Where cx is the connection object. If you can send the output of a python session that shows the error, I will be able to provide more help. -- Billy G. Allie |
From: Karsten H. <Kar...@gm...> - 2003-06-26 20:07:36
|
Dear all, suppose I want to find occurrences of "test" in a column "data" in a table "testtable". SQL would be: select * from testtable where data='test'; Rewriting that for use with pyPgSQL: query = "select * from testtable where data=%s;" arg = "test" curs.execute(query, arg) Right ? However, this will produce the query: select * from testtable where data='test;' which is wrong. Is this a bug or am I not using pyPgSQL properly (if so it needs to be in the docs) ? Thanks, Karsten PS: I do know how to rewrite the query so it works. I am just wondering about the above behaviour. -- GPG key ID E4071346 @ wwwkeys.pgp.net E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346 |
From: Jim H. <ftp...@jo...> - 2003-06-20 20:17:31
|
Hello, I'm having trouble with Unicode. I have set up the database to use UTF-8. I am trying to get some non-ASCII strings in there and I am failing. Here is a short version of the script:

--------------------------------------
#!/usr/bin/python -u
# test unicode support in pyPgSQL
import string;
from pyPgSQL import libpq
from pyPgSQL import PgSQL

dBconnection=PgSQL.connect("::ctanWeb:ftpmaint:")
dBcursor=dBconnection.cursor()
dBcursor.execute("SELECT * FROM authors")
print dBcursor.fetchone() # works fine; I'm connected

# these two also work fine
dBcursor.execute("INSERT INTO authors (name,email) VALUES ('Mike Jones','mik...@ct...')")
dBcursor.execute("INSERT INTO authors (name,email) VALUES (%(name)s,%(email)s)",{'name':'Mike Jones','email':'mik...@ct...'})

# this does not work
dBcursor.execute("INSERT INTO authors (name,email) VALUES ('Mike Schr\xf6der','mik...@ct...')")

# I cannot get to run, even
dBcursor.execute("INSERT INTO authors (name,email) VALUES (%(name)s,%(email)s)",{'name':u'Mike Schr\xf6der','email':u'mik...@ct...'})
-------------------------------------

The line that does not work gives me the error message:

Traceback (most recent call last):
  File "./test_uni.py", line 25, in ?
    dBcursor.execute("INSERT INTO authors (name,email) VALUES ('Mike Schr\xf6der','mik...@ct...')")
  File "/usr/lib/python2.2/site-packages/pyPgSQL/PgSQL.py", line 2956, in execute
    raise OperationalError, msg
libpq.OperationalError: ERROR: Unicode >= 0x10000 is not supoorted

I cannot make it out: is that a PostgreSQL objection or does it come from pyPgSQL? (By the way, the 'supoorted' is not me.) So my questions are: (1) Is Pg unable to take this character? (2) What is the right format for a dictionary substitution? I can't get the final line past an ASCII encoding error: ordinal not in range. Thanks for any help at all; I tried looking in the archives of this list but I had no luck, Jim Hefferon |