From: SourceForge.net <no...@so...> - 2003-10-24 19:23:43

Bugs item #829744, was opened at 2003-10-24 13:52
Message generated for change (Comment added) made by mcfletch
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=829744&group_id=16528

Category: PgInt8
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Mike C. Fletcher (mcfletch)
Assigned to: Nobody/Anonymous (nobody)
Summary: Int8 values interpreted as int4

Initial Comment:
Bigint values inserted from long positive integers in the range 2**31
through 2**32 are retrieved as negative values. Values from 2**32 up are
retrieved as the value shifted 32 bits right.

Result of running the attached script:

    V:\cinemon>bigint_problem.py
    -2147483647
    -2147483648
    0
    1

where the input values were:

    {'curr_up': 2147483649L},
    {'curr_up': 2147483648L},
    {'curr_up': 4294967296L},
    {'curr_up': 4294967297L},

PyPgSQL 2.4 Win32 binary release for Python 2.2.3 against PostgreSQL 7.3
on Win32.

The values are getting into the database fine, so it looks like something
in the database -> Python translation is getting messed up:

    V:\cinemon>psql cinemon
    Welcome to psql 7.3.4, the PostgreSQL interactive terminal.

    cinemon=# select esn,curr_up from terayon_in where esn='someserial';
        esn     |  curr_up
    ------------+------------
     someserial | 2147483649
     someserial | 2147483648
     someserial | 4294967296
     someserial | 4294967297
    (4 rows)

    cinemon=# \d terayon_in
             Table "public.terayon_in"
      Column   |           Type           |    Modifiers
    -----------+--------------------------+--------------
     esn       | character varying        |
     curr_up   | bigint                   | default 0
     curr_down | bigint                   | default 0
    ...

----------------------------------------------------------------------

>Comment By: Mike C. Fletcher (mcfletch)
Date: 2003-10-24 15:23

Message:
Logged In: YES
user_id=34901

This appears to be a problem only with the libpq version of PgInt8; if
line 1843 of PgSQL is altered to read "if 1:" so that the Python version
is always used, the results come back with the proper values.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=829744&group_id=16528
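The four reported values are exactly what a signed 32-bit reinterpretation of the 64-bit bigint would produce. The following is a minimal sketch that models the symptom in Python; it illustrates the arithmetic of the bug, not pyPgSQL's actual libpq C code (the function name `as_int4` is invented for illustration):

```python
def as_int4(value):
    """Interpret the low 32 bits of `value` as a signed 32-bit integer,
    modeling a bigint being read through an int4 conversion."""
    low = value & 0xFFFFFFFF
    return low - 0x100000000 if low >= 0x80000000 else low

# The four inserted values from the report:
inputs = [2147483649, 2147483648, 4294967296, 4294967297]
print([as_int4(v) for v in inputs])
# → [-2147483647, -2147483648, 0, 1]  (matches the reported output)
```

Values just above 2**31 land in the negative half of the int4 range, and values at or above 2**32 lose their high bits entirely, which matches both halves of the report.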
From: SourceForge.net <no...@so...> - 2003-10-06 18:44:37

Bugs item #816729, was opened at 2003-10-02 14:16
Message generated for change (Settings changed) made by ballie01
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=816729&group_id=16528

Category: None
Group: None
Status: Pending
Resolution: Fixed
Priority: 5
Submitted By: jeff putnam (jefu)
Assigned to: Billy G. Allie (ballie01)
Summary: 7.4beta

Initial Comment:
I'm trying to use this with postgresql 7.4beta3 and ran into several
version-related bugs (none seemed to really stem from any SQL problems,
that is).

First I found the same bug as bug number 786712. My fix was a bit
different - I changed pgversion.c around line 183 to:

    /* Allow for alpha and beta versions */
    if (((*last == 'a') || (*last == 'b')) && (isdigit(*(last+1))))
        return (errno != 0);
    if (pgstricmp(last, "alpha") == 0)
        return (errno != 0);
    if (pgstricmp(last, "beta") == 0)
        return (errno != 0);

I also think "Ivalid" should probably be "Invalid" at line 254.

Once that was fixed there were a couple of problems in the test code -
all the following refer to PgSQLTestCases.py.

On line 801, I get a failure to find the right version, so I ran the
query:

    select count(*) from pg_class where relname like 'pg_%%' ;

This returned 137, so I added '7.4':137 to the table there.

In "CheckExecuteWithSingleton" I did the select there:

    select * from pg_database where datname = 'template1'

and got 11 columns, so I added a test around line 620:

    elif self.vstr.startswith("7.4"):
        flen = 11

There was another version-connected error:

    ======================================================================
    FAIL: CheckPgVer (__main__.PgSQLTestCases)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "PgSQLTestCases.py", line 589, in CheckPgVer
        self.fail(msg)
      File "/usr/lib/python2.2/unittest.py", line 254, in fail
        raise self.failureException, msg
    AssertionError: SELECT version() says 7.4beta3, cnx.version says 7.4

In psql, "select version();" returns:

    PostgreSQL 7.4beta3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)
    3.2.2 20030222 (Red Hat Linux 3.2.2-5)

I wasn't sure quite how to fix that, so I set CheckPGVer to succeed
always. Then the tests all seemed to work.

----------------------------------------------------------------------

>Comment By: Billy G. Allie (ballie01)
Date: 2003-10-06 14:44

Message:
Logged In: YES
user_id=8500

Thank you for the bug reports (and especially the fixes). Fixes for all
but one of these bugs have been committed into the CVS repository. The
CheckPgVer issue will not be fixed, since it only occurs with alpha,
beta, or release candidate versions. It will work when 7.4 final is
released.

----------------------------------------------------------------------

Comment By: Billy G. Allie (ballie01)
Date: 2003-10-06 14:43

Message:
Logged In: YES
user_id=8500

Thank you for the bug reports (and especially the fixes). Fixes for all
but one of these bugs have been committed into the CVS repository. The
CheckPgVer issue will not be fixed, since it only occurs with alpha,
beta, or release candidate versions. It will work when 7.4 final is
released.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=816729&group_id=16528
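The version-parsing issue discussed above (and in #786712) comes down to tolerating suffixes like "beta3" when splitting a PostgreSQL version string into numeric components. A hypothetical Python analogue of the pgversion.c fix follows; pyPgSQL's real parser is C, and the function name `parse_pg_version` is invented for illustration. It also shows why `cnx.version` reports "7.4" for "7.4beta3": only the leading numeric components survive parsing.

```python
import re

def parse_pg_version(vstr):
    """Return (major, minor) from strings like '7.4', '7.4beta3', '7.4b1',
    tolerating alpha/beta/rc suffixes instead of rejecting them."""
    m = re.match(r"(\d+)\.(\d+)", vstr)
    if m is None:
        raise ValueError("Invalid format for PgVersion construction: %r" % vstr)
    return int(m.group(1)), int(m.group(2))

print(parse_pg_version("7.4beta3"))  # → (7, 4)
print(parse_pg_version("7.3.4"))     # → (7, 3)
```

A parser written this way never sees the "4beta1" token that crashed the original strtok-based code, because the regex stops at the first non-digit.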
From: SourceForge.net <no...@so...> - 2003-10-06 18:38:37

Bugs item #786712, was opened at 2003-08-11 10:49
Message generated for change (Comment added) made by ballie01
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=786712&group_id=16528

Category: libpq
Group: None
>Status: Pending
>Resolution: Fixed
Priority: 5
Submitted By: gerhard quell (gquell)
Assigned to: Billy G. Allie (ballie01)
Summary: use postgres 7.4beta1

Initial Comment:
There is a problem if you're using pypgsql with a new postgresql database
like 7.4beta1. You get error messages like:

    >db = PgSQL.connect("skesrv00:5500:postgres:postgres")
    Exception
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "/usr/lib/python2.2/site-packages/pyPgSQL/PgSQL.py", line 2211, in connect
        return Connection(connInfo, client_encoding, unicode_results)
      File "/usr/lib/python2.2/site-packages/pyPgSQL/PgSQL.py", line 2331, in __init__
        print PQconnectdb(connInfo)
    ValueError: Ivalid format for PgVersion construction.

In this case you must add a simple workaround to the file pgversion.c
(near line 320):

    token = pg_strtok_r((char *)NULL, ".", &save_ptr);
    /* <workaround> */
    /* simple workaround for a token like "4beta1" */
    if (!isdigit(token[1]))
        token[1] = 0x0;
    /* </workaround> */
    if ((token != (char *)NULL) && (*token != '\0') &&

Then:

    setup.py build
    setup.py install

----------------------------------------------------------------------

>Comment By: Billy G. Allie (ballie01)
Date: 2003-10-06 14:38

Message:
Logged In: YES
user_id=8500

A fix for this bug has been committed into the CVS repository.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=786712&group_id=16528
From: SourceForge.net <no...@so...> - 2003-10-02 18:16:28

Bugs item #816729, was opened at 2003-10-02 11:16
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=816729&group_id=16528

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: jeff putnam (jefu)
Assigned to: Nobody/Anonymous (nobody)
Summary: 7.4beta

Initial Comment:
I'm trying to use this with postgresql 7.4beta3 and ran into several
version-related bugs (none seemed to really stem from any SQL problems,
that is).

First I found the same bug as bug number 786712. My fix was a bit
different - I changed pgversion.c around line 183 to:

    /* Allow for alpha and beta versions */
    if (((*last == 'a') || (*last == 'b')) && (isdigit(*(last+1))))
        return (errno != 0);
    if (pgstricmp(last, "alpha") == 0)
        return (errno != 0);
    if (pgstricmp(last, "beta") == 0)
        return (errno != 0);

I also think "Ivalid" should probably be "Invalid" at line 254.

Once that was fixed there were a couple of problems in the test code -
all the following refer to PgSQLTestCases.py.

On line 801, I get a failure to find the right version, so I ran the
query:

    select count(*) from pg_class where relname like 'pg_%%' ;

This returned 137, so I added '7.4':137 to the table there.

In "CheckExecuteWithSingleton" I did the select there:

    select * from pg_database where datname = 'template1'

and got 11 columns, so I added a test around line 620:

    elif self.vstr.startswith("7.4"):
        flen = 11

There was another version-connected error:

    ======================================================================
    FAIL: CheckPgVer (__main__.PgSQLTestCases)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "PgSQLTestCases.py", line 589, in CheckPgVer
        self.fail(msg)
      File "/usr/lib/python2.2/unittest.py", line 254, in fail
        raise self.failureException, msg
    AssertionError: SELECT version() says 7.4beta3, cnx.version says 7.4

In psql, "select version();" returns:

    PostgreSQL 7.4beta3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)
    3.2.2 20030222 (Red Hat Linux 3.2.2-5)

I wasn't sure quite how to fix that, so I set CheckPGVer to succeed
always. Then the tests all seemed to work.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=816729&group_id=16528
From: SourceForge.net <no...@so...> - 2003-09-19 20:01:38

Bugs item #809127, was opened at 2003-09-19 11:30
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=809127&group_id=16528

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Sergey Suleymanov (solt)
Assigned to: Nobody/Anonymous (nobody)
Summary: DateTime parsing

Initial Comment:
There is a problem with parsing the German datestyle with
DateTime.ISO.ParseAny:

    ValueError: unsupported format: "19.09.2003"

Maybe DateTime.Parser.DateTimeFromString is better?

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=809127&group_id=16528
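PostgreSQL's German datestyle renders dates as day.month.year, which an ISO-only parser rejects. A small sketch of handling that format with the standard library (mx.DateTime is not assumed installed here, and the helper name `parse_german_date` is invented for illustration):

```python
from datetime import datetime

def parse_german_date(s):
    """Parse a German-datestyle date like '19.09.2003' (day.month.year)."""
    return datetime.strptime(s, "%d.%m.%Y").date()

print(parse_german_date("19.09.2003"))  # → 2003-09-19
```

A general-purpose parser such as the suggested DateTime.Parser.DateTimeFromString would try several formats like this one in turn rather than assuming ISO order.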
From: SourceForge.net <no...@so...> - 2003-09-05 00:17:52

Bugs item #800801, was opened at 2003-09-04 20:17
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=800801&group_id=16528

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Mike C. Fletcher (mcfletch)
Assigned to: Nobody/Anonymous (nobody)
Summary: Precision lost from mxDateTime during queries

Initial Comment:
When working with PostgreSQL timestamps, round-trips to/from Python wind
up losing data (the least-significant digits of the timestamp, 1000ths
of seconds and less). The result is that timestamps which include
1000ths of seconds cannot be queried from Python using mxDateTime
values.

PyPgSQL should likely be using code something like the following to
format an mxDateTime value for a timestamp:

    new.Format('%Y-%m-%d %H:%M:%%s') % (new.second,)

or

    new.Format('%Y-%m-%d %H:%M:%%r') % (new.second,)

(the difference being the use of the 'r' format, which will do as much
as possible to be sure that the resulting string is capable of exactly
reproducing the original value).

To illustrate:

    from pytable import dbspecifier, sqlquery
    spec = dbspecifier.DBSpecifier(
        dbdriver='PyPgSQL', database='cinemon', host='localhost',
        user='cinemon', password='xxxx',
    )
    driver, connection = spec.connect()
    ((new,),) = sqlquery.SQLQuery(
        sql="""SELECT MAX( archive_ts ) from log_modem;"""
    )(connection)
    print 'new', repr(new)
    print 'seconds', new.second
    records = sqlquery.SQLQuery(
        sql="""SELECT * from log_modem where archive_ts = %%(new)s;"""
    )(connection, new=new).fetchall()
    print records  # is an empty list because of truncation...

    V:\cinemon>err_date_truncation.py
    new <DateTime object for '2003-09-04 19:50:16.30' at 93a820>
    seconds 16.297984
    []

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=800801&group_id=16528
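The formatting idea proposed above — substitute the seconds back in at full precision instead of letting the format string truncate them — can be sketched with the standard library's datetime in place of mxDateTime (the helper name `format_ts` is invented for illustration):

```python
from datetime import datetime

def format_ts(ts):
    """Format a timestamp keeping fractional seconds, analogous to the
    Format('%Y-%m-%d %H:%M:%%s') % (new.second,) idea from the report."""
    seconds = ts.second + ts.microsecond / 1e6
    return ts.strftime("%Y-%m-%d %H:%M:") + ("%09.6f" % seconds)

ts = datetime(2003, 9, 4, 19, 50, 16, 297984)
print(format_ts(ts))  # → 2003-09-04 19:50:16.297984
```

Truncating to "19:50:16.30", as in the report, makes the equality query miss the stored row; keeping all six fractional digits lets the value round-trip.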
From: SourceForge.net <no...@so...> - 2003-08-11 14:57:33

Bugs item #786712, was opened at 2003-08-11 14:49
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=786712&group_id=16528

Category: libpq
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: gerhard quell (gquell)
Assigned to: Nobody/Anonymous (nobody)
Summary: use postgres 7.4beta1

Initial Comment:
There is a problem if you're using pypgsql with a new postgresql database
like 7.4beta1. You get error messages like:

    >db = PgSQL.connect("skesrv00:5500:postgres:postgres")
    Exception
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "/usr/lib/python2.2/site-packages/pyPgSQL/PgSQL.py", line 2211, in connect
        return Connection(connInfo, client_encoding, unicode_results)
      File "/usr/lib/python2.2/site-packages/pyPgSQL/PgSQL.py", line 2331, in __init__
        print PQconnectdb(connInfo)
    ValueError: Ivalid format for PgVersion construction.

In this case you must add a simple workaround to the file pgversion.c
(near line 320):

    token = pg_strtok_r((char *)NULL, ".", &save_ptr);
    /* <workaround> */
    /* simple workaround for a token like "4beta1" */
    if (!isdigit(token[1]))
        token[1] = 0x0;
    /* </workaround> */
    if ((token != (char *)NULL) && (*token != '\0') &&

Then:

    setup.py build
    setup.py install

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=786712&group_id=16528
From: SourceForge.net <no...@so...> - 2003-07-18 08:41:27

Patches item #773489, was opened at 2003-07-18 10:41
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=316528&aid=773489&group_id=16528

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Thomas Langås (tlan)
Assigned to: Nobody/Anonymous (nobody)
Summary: Add support for 'infinity' in query-results

Initial Comment:
Today (tested with your 2.4 release as well), it crashes with an
exception when DATETIME fields contain 'infinity'. On line 800 in
PgSQL.py add:

    if value in ('infinity', '+infinity', '-infinity'):
        fake_infinity = '9999-12-13 23:59:59'  # fake infinity
        if value[0] == '-':
            value = '-' + fake_infinity
        else:
            value = fake_infinity

Maybe you want to add some sort of constant somewhere or something, so
that one can do "if date == <constant>" to check for infinity, and so
that one can change the date/time at a later date without breaking
programs.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=316528&aid=773489&group_id=16528
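The patch above, including the submitter's suggestion of exported constants, can be sketched as follows. The choice of datetime.max/datetime.min as sentinels and the function name `normalize_timestamp` are assumptions for illustration, not pyPgSQL's actual implementation:

```python
from datetime import datetime

# Sentinel constants, so callers can test "date == POS_INFINITY" and the
# underlying representation can change later without breaking programs.
POS_INFINITY = datetime.max  # stands in for 'infinity'
NEG_INFINITY = datetime.min  # stands in for '-infinity'

def normalize_timestamp(value):
    """Map PostgreSQL's 'infinity' timestamp literals to sentinels before
    normal parsing; parse everything else as an ordinary timestamp."""
    if value in ('infinity', '+infinity'):
        return POS_INFINITY
    if value == '-infinity':
        return NEG_INFINITY
    return datetime.strptime(value, "%Y-%m-%d %H:%M:%S")

print(normalize_timestamp('infinity') == POS_INFINITY)  # → True
```

Checking for the literal strings before parsing avoids the crash, and comparing against a named constant is more robust than hard-coding a fake date like '9999-12-13 23:59:59' at every call site.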
From: SourceForge.net <no...@so...> - 2003-07-14 21:07:46

Bugs item #747525, was opened at 2003-06-02 09:35
Message generated for change (Settings changed) made by ballie01
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=747525&group_id=16528

Category: PgSQL
Group: None
>Status: Closed
Resolution: None
Priority: 5
Submitted By: Michael Owens (owensmk)
Assigned to: Billy G. Allie (ballie01)
Summary: Commit Invalidating Client-Side Cursors

Initial Comment:
This is a behavior issue. connection::commit() invalidates all cursors,
even when noPostgresCursor = 1. Even though a transaction has completed,
it seems unnecessary to make data already stored in client-side cursors
unreadable. As best I can tell, this behavior is not required by the
specs either. I can understand why server-side cursors may have need of
this, but it would be nice if client-side cursors could still be read.

----------------------------------------------------------------------

Comment By: Billy G. Allie (ballie01)
Date: 2003-06-03 00:28

Message:
Logged In: YES
user_id=8500

This is mandated by the way PostgreSQL works. PostgreSQL does not have
cursors in the DB-API 2.0 sense. At the libpq API level, you execute a
query on the connection and retrieve a result set containing all the
data resulting from the query. PostgreSQL portals (i.e. what other DBs
call cursors) are backend constructs that allow the front end to get
data from the query in smaller units than just retrieving all the rows
at once.

In pyPgSQL, cursors are a conglomeration of a connection object (needed
to execute the query and retrieve results) and a result object which has
all the results from the query (if portals are in use, then the result
object will not contain data until a FETCH SQL command is executed).
Also note that there can only be one active query on a connection at a
time in PostgreSQL.

Also, transactions are at the connection level, not the cursor level,
which means that when a transaction is rolled back or committed, all
cursors are reset to their initial state. The reason: portals only exist
within a transaction, so when a transaction is ended, all portals are
closed and/or invalidated, which means that the cursors using those
portals are reset. This is done even if noPostgresCursor is set to 1, so
that the semantics of the cursor object in pyPgSQL do not change
depending on the setting of noPostgresCursor.

If you need to keep the data returned by a query, then you should use
the fetchall() method and save the returned list of results. The
PgResultSet objects in the list have the cursor.descriptor information
available via the PgResultSet.descriptor() method, so that information
is not "lost" when the cursor is closed or reused.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=747525&group_id=16528
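The recommended workaround — materialize results with fetchall() before ending the transaction — in runnable form. sqlite3 stands in for pyPgSQL here purely because both follow DB-API 2.0 and no PostgreSQL server is assumed; the table and data are invented for the demonstration:

```python
import sqlite3

cnx = sqlite3.connect(":memory:")
cur = cnx.cursor()
cur.execute("CREATE TABLE items (id INTEGER, name TEXT)")
cur.execute("INSERT INTO items VALUES (1, 'widget')")

cur.execute("SELECT id, name FROM items")
rows = cur.fetchall()  # copy the data out of the cursor first
cnx.commit()           # in pyPgSQL this resets every cursor...
print(rows)            # ...but the already-fetched list is untouched
```

The fetched list holds plain Python objects, so it survives the cursor reset that commit() triggers in pyPgSQL.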
From: SourceForge.net <no...@so...> - 2003-06-25 11:23:23

Bugs item #760412, was opened at 2003-06-25 11:23
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=760412&group_id=16528

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Ian (ian888)
Assigned to: Nobody/Anonymous (nobody)
Summary: array.py regression test fails version 2.3, linux

Initial Comment:
ia...@fo...

Using PostgreSQL 7.3.3 and pypgsql version 2.3, I built pypgsql on SuSE
8.2 Linux. All tests passed except this one. This could be a test
failure as opposed to a real failure. Please investigate. Text follows:

    carra:/usr/share/pypgsql/test/regression # python array.py
    ....E
    ======================================================================
    ERROR: CheckForIntInsertMultiDim (__main__.ArrayTestCases)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "array.py", line 181, in CheckForIntInsertMultiDim
        cursor.execute("insert into test(ia) values (%s)", intlist)
      File "/usr/lib/python2.2/site-packages/pyPgSQL/PgSQL.py", line 2956, in execute
        raise OperationalError, msg
    OperationalError: ERROR: pg_atoi: zero-length string
    ----------------------------------------------------------------------
    Ran 5 tests in 7.217s

    FAILED (errors=1)

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=116528&aid=760412&group_id=16528
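The failing test inserts a multi-dimensional Python list as a PostgreSQL array, and "pg_atoi: zero-length string" is the server-side error you get when an empty element (e.g. a literal like '{,}') reaches an integer column. A sketch of the literal formatting involved; this is illustrative, not pyPgSQL's actual conversion code, and `to_pg_array` is an invented name:

```python
def to_pg_array(value):
    """Render a (possibly nested) Python list as a PostgreSQL array
    literal, e.g. [[1, 2], [3, 4]] -> '{{1,2},{3,4}}'."""
    if isinstance(value, list):
        return "{" + ",".join(to_pg_array(v) for v in value) + "}"
    return str(value)

print(to_pg_array([[1, 2], [3, 4]]))  # → {{1,2},{3,4}}
print(to_pg_array([]))                # → {} (must never become '{,}')
```

A formatter that joins elements without guarding the empty case can emit a zero-length element, which would reproduce exactly the pg_atoi error in the report.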
From: SourceForge.net <no...@so...> - 2003-06-04 03:20:34

Patches item #738712, was opened at 2003-05-16 06:02
Message generated for change (Settings changed) made by ballie01
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=316528&aid=738712&group_id=16528

Category: None
Group: None
>Status: Pending
>Resolution: Fixed
Priority: 7
Submitted By: Laurent Pinchart (peter_pan78)
Assigned to: Billy G. Allie (ballie01)
Summary: inTransaction not updated when committing a large object

Initial Comment:
If no transaction is in progress when creating a new large object,
Connection.binary sets up a new transaction, which sets the internal
variable inTransaction to 1. The transaction is committed at the end of
Connection.binary, but inTransaction is not set back to 0 as it should
be.

----------------------------------------------------------------------

>Comment By: Billy G. Allie (ballie01)
Date: 2003-06-03 23:20

Message:
Logged In: YES
user_id=8500

As I received no comments one way or the other, I implemented the change
I described earlier.

----------------------------------------------------------------------

Comment By: Billy G. Allie (ballie01)
Date: 2003-05-19 13:43

Message:
Logged In: YES
user_id=8500

After some thought, I now believe that the transaction should not be
committed after the large object is created. This would push the
responsibility of committing the transaction to the application
programmer. The sequence would usually be:

1. (optionally) perform some queries or updates.
2. create the large object.
3. store the new large object in a table.
4. (optionally) perform other queries and updates.
5. commit (or rollback) the transaction.

This change will allow the creation of the large object to be rolled
back if a problem occurs after its creation. Currently, the large object
is committed before it is returned to the application, which, IMHO, is
not the way it should be. Comments, anyone?

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=316528&aid=738712&group_id=16528
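The state-tracking bug reported above can be modeled in a few lines: if binary() opens its own transaction, the commit at the end must also reset the inTransaction flag. This is a hypothetical minimal model of the flag handling, not pyPgSQL's actual Connection class:

```python
class Conn:
    """Toy model of a connection whose binary() may open and close its
    own transaction around large-object creation."""
    def __init__(self):
        self.inTransaction = 0

    def begin(self):
        self.inTransaction = 1

    def commit(self):
        self.inTransaction = 0  # the fix: commit must reset the flag

    def binary(self, data):
        opened_here = not self.inTransaction
        if opened_here:
            self.begin()
        lobj = object()  # stand-in for large-object creation
        if opened_here:
            self.commit()  # without the reset in commit(), the flag leaks
        return lobj

c = Conn()
c.binary(b"...")
print(c.inTransaction)  # → 0
```

In the buggy version, commit() left inTransaction at 1, so every later operation wrongly believed a transaction was still open.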
From: SourceForge.net <no...@so...> - 2003-06-04 03:18:27

Patches item #708013, was opened at 2003-03-22 08:52
Message generated for change (Settings changed) made by ballie01
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=316528&aid=708013&group_id=16528

Category: None
Group: None
>Status: Pending
>Resolution: Accepted
Priority: 5
Submitted By: Christian Heimes (tiran)
Assigned to: Gerhard Häring (ghaering)
Summary: type of PG_TIMETZ is NoneType instead of timetz

Initial Comment:
I'm working on a DA for Zope3 (see
http://cvs.zope.org/zopeproducts/pypgsqlda/ and my last posting). The
database abstraction layer of Zope3 supports converters to Zope3 types
keyed by the type of the column. PG_TIMETZ has no corresponding
typechar. There should be a line like this in libpqmodule.c:

    case PG_TIMETZ:
        desc = "timetz";
        break;

greetings
Christian

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=316528&aid=708013&group_id=16528
From: SourceForge.net <no...@so...> - 2003-06-03 04:28:10
|
Bugs item #747525, was opened at 2003-06-02 09:35 Message generated for change (Comment added) made by ballie01 You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=116528&aid=747525&group_id=16528 Category: PgSQL Group: None Status: Open Resolution: None Priority: 5 Submitted By: Michael Owens (owensmk) >Assigned to: Billy G. Allie (ballie01) Summary: Commit Invalidating Client-Side Cursors Initial Comment: This is a behavior issue. connection::commit() invalidates all cursors, even when noPostgresCursor = 1. Even though a transaction has completed, it seems unnecessary to make data already stored in client-side cursors unreadable. As best I can tell, this behavior is not required by the specs either. I can understand why server-side cursors may have need of this, but it would be nice if client-side cursors could still be read. ---------------------------------------------------------------------- >Comment By: Billy G. Allie (ballie01) Date: 2003-06-03 00:28 Message: Logged In: YES user_id=8500 This is mandated by the way PostgreSQL works. PostgreSQL does not have cursors in the DB-API 2.0 sense. At the libpq API level, you execute a query on the connection, and retrieve a result set containing all the data resulting from the query. PostgreSQL portals (i.e. what other DBs call cursors) are backend constructs that allow the front end to get data from the query in smaller units than just retrieving all the rows at once. In pyPgSQL, cursors are a conglomeration of a connection object (needed to execute the query and retrieve results) and a result object which has all the results from the query (if portals are in use, then the result object will not contain data until a FETCH SQL command is executed). Also note that there can only be 1 active query on a connection at a time in PostgreSQL. 
Also, transactions are at the connection level, not the cursor level, which means that when a transaction is rolled back or committed, all cursors are reset to their initial state. The reason: portals only exist within a transaction, so when a transaction is ended, all portals are closed and/or invalidated, which means that the cursors using those portals are reset. This is done even if noPostgresCursor is set to 1, so that the semantics of the cursor object in pyPgSQL do not change depending on the setting of noPostgresCursor. If you need to keep the data returned by a query, then you should use the fetchall() method and save the returned list of results. The PgResultSet objects in the list have the cursor.descriptor information available via the PgResultSet.descriptor() method, so that information is not "lost" when the cursor is closed or reused. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=116528&aid=747525&group_id=16528 |
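[The workaround in Billy's last paragraph — materialise the result set with fetchall() before committing — looks like this in DB-API terms. The snippet uses Python's sqlite3 module as a stand-in database so it is self-contained and runnable; with pyPgSQL the cursor calls follow the same DB-API pattern.]

```python
import sqlite3  # stand-in DB-API database; pyPgSQL's cursor API is analogous

cx = sqlite3.connect(":memory:")
cu = cx.cursor()
cu.execute("CREATE TABLE t (x INTEGER)")
cu.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

cu.execute("SELECT x FROM t ORDER BY x")
rows = cu.fetchall()   # save the rows client-side...
cx.commit()            # ...BEFORE the commit that (in pyPgSQL) resets cursors

# The saved list is a plain Python object, untouched by the commit.
assert [r[0] for r in rows] == [1, 2, 3]
```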
From: SourceForge.net <no...@so...> - 2003-06-02 13:35:44
|
Bugs item #747525, was opened at 2003-06-02 09:35 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=116528&aid=747525&group_id=16528 Category: PgSQL Group: None Status: Open Resolution: None Priority: 5 Submitted By: Michael Owens (owensmk) Assigned to: Nobody/Anonymous (nobody) Summary: Commit Invalidating Client-Side Cursors Initial Comment: This is a behavior issue. connection::commit() invalidates all cursors, even when noPostgresCursor = 1. Even though a transaction has completed, it seems unnecessary to make data already stored in client-side cursors unreadable. As best I can tell, this behavior is not required by the specs either. I can understand why server-side cursors may have need of this, but it would be nice if client-side cursors could still be read. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=116528&aid=747525&group_id=16528 |
From: SourceForge.net <no...@so...> - 2003-05-29 17:44:45
|
Bugs item #745384, was opened at 2003-05-29 03:26 Message generated for change (Comment added) made by ballie01 You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=116528&aid=745384&group_id=16528 Category: PgConnection Group: None >Status: Pending >Resolution: Fixed >Priority: 7 Submitted By: Witek Wolejszo (witek_w_) >Assigned to: Billy G. Allie (ballie01) Summary: notifies return objects with random relname field Initial Comment: The relname field of the pgnotify object is filled with random characters. Check comp.lang.python thread "postgresql notifications" started 28/05/2003 8:50 ---------------------------------------------------------------------- >Comment By: Billy G. Allie (ballie01) Date: 2003-05-29 13:44 Message: Logged In: YES user_id=8500 This problem has been fixed (in pgnotify.c) and committed to the CVS repository. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=116528&aid=745384&group_id=16528 |
From: SourceForge.net <no...@so...> - 2003-05-29 07:26:05
|
Bugs item #745384, was opened at 2003-05-29 09:26 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=116528&aid=745384&group_id=16528 Category: PgConnection Group: None Status: Open Resolution: None Priority: 5 Submitted By: Witek Wolejszo (witek_w_) Assigned to: Nobody/Anonymous (nobody) Summary: notifies return objects with random relname field Initial Comment: The relname field of pgnotify object is filled with random characters. Check comp.lang.python thread "postgresql notifications" started 28/05/2003 8:50 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=116528&aid=745384&group_id=16528 |
From: SourceForge.net <no...@so...> - 2003-05-19 17:44:14
|
Patches item #738712, was opened at 2003-05-16 06:02 Message generated for change (Settings changed) made by ballie01 You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=316528&aid=738712&group_id=16528 Category: None Group: None Status: Open Resolution: None >Priority: 7 Submitted By: Laurent Pinchart (peter_pan78) >Assigned to: Billy G. Allie (ballie01) Summary: inTransaction not updated when committing a large object Initial Comment: If no transaction is in progress when creating a new large object, Connection.binary sets up a new transaction, which sets the internal variable inTransaction to 1. The transaction is committed at the end of Connection.binary, but inTransaction is not set back to 0 as it should be. ---------------------------------------------------------------------- Comment By: Billy G. Allie (ballie01) Date: 2003-05-19 13:43 Message: Logged In: YES user_id=8500 After some thought, I now believe that the transaction should not be committed after the large object is created. This would push the responsibility of committing the transaction to the application programmer. The sequence would usually be: 1. (optionally) perform some queries or updates. 2. create the large object. 3. store the new large object in a table. 4. (optionally) perform other queries and updates. 5. commit (or rollback) the transaction. This change will allow the creation of the large object to be rolled back if a problem occurs after its creation. Currently, the large object is committed before it is returned to the application, which, IMHO, is not the way it should be. Comments, anyone? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=316528&aid=738712&group_id=16528 |
From: SourceForge.net <no...@so...> - 2003-05-19 17:43:42
|
Patches item #738712, was opened at 2003-05-16 06:02 Message generated for change (Comment added) made by ballie01 You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=316528&aid=738712&group_id=16528 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Laurent Pinchart (peter_pan78) Assigned to: Nobody/Anonymous (nobody) Summary: inTransaction not updated when committing a large object Initial Comment: If no transaction is in progress when creating a new large object, Connection.binary sets up a new transaction, which sets the internal variable inTransaction to 1. The transaction is committed at the end of Connection.binary, but inTransaction is not set back to 0 as it should be. ---------------------------------------------------------------------- >Comment By: Billy G. Allie (ballie01) Date: 2003-05-19 13:43 Message: Logged In: YES user_id=8500 After some thought, I now believe that the transaction should not be committed after the large object is created. This would push the responsibility of committing the transaction to the application programmer. The sequence would usually be: 1. (optionally) perform some queries or updates. 2. create the large object. 3. store the new large object in a table. 4. (optionally) perform other queries and updates. 5. commit (or rollback) the transaction. This change will allow the creation of the large object to be rolled back if a problem occurs after its creation. Currently, the large object is committed before it is returned to the application, which, IMHO, is not the way it should be. Comments, anyone? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=316528&aid=738712&group_id=16528 |
From: SourceForge.net <no...@so...> - 2003-05-16 10:02:24
|
Patches item #738712, was opened at 2003-05-16 12:02 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=316528&aid=738712&group_id=16528 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Laurent Pinchart (peter_pan78) Assigned to: Nobody/Anonymous (nobody) Summary: inTransaction not updated when committing a large object Initial Comment: If no transaction is in progress when creating a new large object, Connection.binary sets up a new transaction, which sets the internal variable inTransaction to 1. The transaction is committed at the end of Connection.binary, but inTransaction is not set back to 0 as it should be. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=316528&aid=738712&group_id=16528 |
From: Michael B. <bo...@mi...> - 2003-04-06 19:46:52
|
Could 'setup.py' be changed so it doesn't have to be modified for use on a Mandrake Linux system? I've installed a couple versions of pyPgSQL (2.2, 2.3) on a couple of Mandrake systems (9.0, 9.1) and found it frustrating. If the include and library directories are wrong you get a gcc error message which is misleading. A fix to the problem would be much appreciated... a proposal is below. Thanks, Michael Proposed changes in 'setup.py': ------ SNIP import string SNIP if USE_CUSTOM: include_dirs = YOUR_LIST_HERE library_dirs = YOUR_LIST_HERE elif(string.find(sys.version, "Mandrake Linux")!=-1): include_dirs = ["/usr/include/pgsql"] library_dirs = ["/usr/lib/pgsql"] SNIP ------ NOTE: "sys.version" returns: '2.2.2 (#2, Feb 5 2003, 10:40:08) \n[GCC 3.2.1 (Mandrake Linux 9.1 3.2.1-5mdk)]' |
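[Michael's proposal amounts to keying the directory choice off the platform string embedded in sys.version. Here is that check sketched as a small function; the non-Mandrake fallback paths are illustrative assumptions, not taken from pyPgSQL's actual setup.py.]

```python
import sys

def pg_dirs(version_string=sys.version):
    """Return (include_dirs, library_dirs) for building against PostgreSQL."""
    if "Mandrake Linux" in version_string:          # per the proposal above
        return ["/usr/include/pgsql"], ["/usr/lib/pgsql"]
    # Fallback paths below are illustrative defaults, not pyPgSQL's real ones.
    return ["/usr/include/postgresql"], ["/usr/lib"]

# sys.version on Mandrake 9.1 looks like this (from the NOTE above):
mdk = "2.2.2 (#2, Feb 5 2003, 10:40:08) \n[GCC 3.2.1 (Mandrake Linux 9.1 3.2.1-5mdk)]"
include_dirs, library_dirs = pg_dirs(mdk)
assert include_dirs == ["/usr/include/pgsql"]
assert library_dirs == ["/usr/lib/pgsql"]
```

A sturdier alternative, where PostgreSQL's pg_config tool is installed, is to ask it for the paths directly (pg_config --includedir and pg_config --libdir) instead of guessing per distribution.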
From: SourceForge.net <no...@so...> - 2003-03-26 10:19:35
|
Bugs item #708002, was opened at 2003-03-22 14:12 You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=116528&aid=708002&group_id=16528 Category: PgSQL Group: None Status: Open Resolution: None Priority: 5 Submitted By: Christian Heimes (tiran) Assigned to: Billy G. Allie (ballie01) Summary: interval (DateTimeDelta) returns maximal days Initial Comment: I'm working on a pyPgSQLDA for Zope3 (see http://cvs.zope.org/zopeproducts/pypgsqlda/). I have noticed a problem with time intervals. INSERT INTO testinterval VALUES('20 years'); INSERT INTO testinterval VALUES('20 days'); pyPgSQL returns 20.0 days for both statements, which is really strange because the interval type is 12 bytes long and supports more than a billion years. ---------------------------------------------------------------------- >Comment By: Gerhard Häring (ghaering) Date: 2003-03-26 11:33 Message: Logged In: YES user_id=163326 Do we really want to invent our own INTERVAL type? What about inventing one like this: class BrokenInterval(str): pass This could be used for the broken (IMO) intervals. All others could then still be returned as DateTimeDelta. Just a thought, because I'm wary of making the TODO list grow much more ;-) ---------------------------------------------------------------------- Comment By: Billy G. Allie (ballie01) Date: 2003-03-26 08:56 Message: Logged In: YES user_id=8500 I believe the original problem of 20 days and 20 years returning the same value is fixed in the latest PgSQL.py from CVS: bga=# \d dtest Table "public.dtest" Column | Type | Modifiers --------+----------+----------- a | interval | bga=# select * from dtest; a ----------------- 20 years 20 days 7304 days 20:24 (3 rows) bga=# \q $ python Python 2.2.2 (#7, Nov 27 2002, 17:10:05) [C] on openunix8 Type "help", "copyright", "credits" or "license" for more information. 
>>> from pyPgSQL import PgSQL >>> cx = PgSQL.connect(password='**********') >>> cu = cx.cursor() >>> cu.execute('select * from dtest') >>> r = cu.fetchall() >>> for i in r: ... print i ... [<DateTimeDelta object for '7304:20:24:00.00' at 81fc470>] [<DateTimeDelta object for '20:00:00:00.00' at 81fa8b0>] [<DateTimeDelta object for '7304:20:24:00.00' at 81fc5b0>] >>> But PostgreSQL intervals seem weird to me. For example: An interval of 1 year is equal to an interval of 360 (not 365) days bga=# select '1 year'::interval = '360 days'::interval; ?column? ---------- t (1 row) but adding an interval of 1 year to a date does the right thing: bga=# select '7/11/1954'::date + '1 year'::interval; ?column? --------------------- 1955-07-11 00:00:00 (1 row) Yet subtracting the dates gives a value of 365: bga=# select '7/11/1955'::date - '7/11/1954'::date; ?column? ---------- 365 (1 row) This is all very strange to me. Further investigation gives: 1 year = 360 days 1 month = 30 days 1 month = 4 weeks 2 days 1 week = 7 days which doesn't seem to bear much relationship to the actual number of days between 2 dates. It seems to me that the only way to accurately represent a PostgreSQL interval is to do what PostgreSQL does and remember the original user input, i.e. have an object with attributes for years, months, weeks, days, hours, minutes, seconds. Adding and subtracting the interval from a date would then involve adding/subtracting the parts to/from the date. Your thoughts? ---------------------------------------------------------------------- Comment By: Christian Heimes (tiran) Date: 2003-03-23 22:12 Message: Logged In: YES user_id=560817 I'll see what I can do. But first I have to study for university. Maybe you are interested in my ideas of mapping some other postgres-specific types to python. I have had some first successes converting inet, cidr and the geometric types to lists and tuples (see cvs.zope.org/zopeproducts/pypgsqlda). 
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/68205 would be nice for the NULL value, too. The Null class supports postgres-like behavior like NULL != False, NULL != '', NULL != 0 ... Christian ---------------------------------------------------------------------- Comment By: Gerhard Häring (ghaering) Date: 2003-03-23 21:55 Message: Logged In: YES user_id=163326 This sounds like the best option. If you could provide a patch, that'd be great :-) This also sounds like another change that'd justify a bugfix release. ---------------------------------------------------------------------- Comment By: Christian Heimes (tiran) Date: 2003-03-23 21:50 Message: Logged In: YES user_id=560817 I think the best way to get rid of these silly problems with date and time would be changing the orbit of earth and using the Napoleon Calendar. Who wants to write a proposal on how to do this? </joke> Why not support both ways, so the user can choose whether he wants to convert years to 365.24 days or get an exception? ---------------------------------------------------------------------- Comment By: Gerhard Häring (ghaering) Date: 2003-03-23 21:39 Message: Logged In: YES user_id=163326 Alternatively, we could check whether only days/hours/... are used in the string, then use the mxDateTime function to parse it. Otherwise, raise an exception. At least this wouldn't fail silently, returning a wrong value. Thoughts? ---------------------------------------------------------------------- Comment By: Christian Heimes (tiran) Date: 2003-03-23 20:26 Message: Logged In: YES user_id=560817 I studied my Practical Postgres book from O'Reilly and found an interesting piece of information. A 'postgres year' is 365.24 days. My opinion is: * convert a year to 365.24 days * convert a month to 365.24/12 days This is not an exact translation from years to days, but better than converting the string "20 years" to 20 days. I think it's the best workaround to keep backward compatibility with existing data. 
* put a HUGE warning in the readme of pypgsql, so programmers are informed about the problems that can occur when using years and months in INTERVAL statements. ---------------------------------------------------------------------- Comment By: Gerhard Häring (ghaering) Date: 2003-03-23 19:57 Message: Logged In: YES user_id=163326 Assigning to Billy, because I just don't get it: This is WEIRD. It seems PostgreSQL remembers the years and months. I always thought it might convert everything into some base, say seconds internally. Looks like it doesn't. This might sound like a provocation, but here it goes: if PostgreSQL isn't smart enough to convert, say, a year to 365 days, I don't think we should try to be so smart. A DateTimeDelta from mxDateTime is an exact timespan, not a remember-the-user-input thingie like PostgreSQL's INTERVAL seems to be. So it's not possible to do an exact conversion from INTERVAL to DateTimeDelta in all cases. Where it is possible, I think it currently works. My solution for the application programmer is to only use days/hours/seconds when inserting values into an INTERVAL column. I'd be glad to hear your opinion on this. ---------------------------------------------------------------------- Comment By: Christian Heimes (tiran) Date: 2003-03-22 16:53 Message: Logged In: YES user_id=560817 I did some more testing: $ python2.2 Python 2.2.2 (#1, Jan 18 2003, 10:18:59) [GCC 3.2.2 20030109 (Debian prerelease)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> from mx import DateTime >>> t = DateTime.DateTimeDelta(7300) >>> t <DateTimeDelta object for '7300:00:00:00.00' at 819a1c0> >>> t.days 7300.0 >>> test=> SELECT * FROM testint; inter ---------- 20 days 20 years 2 mons 20 years 14 days (5 rows) These are the values from the database for: 20 days, 20 years, 2 month, 2 decades, 2 weeks test=> SET DATESTYLE TO NONEUROPEAN, GERMAN; SET VARIABLE test=> SELECT * FROM testint; inter ------------ @ 20 days @ 20 years @ 2 mons @ 20 years @ 14 days (5 rows) mons and years are not supported by PgSQL.TypeCache.interval2datetimedelta Maybe you could convert them to the following values: 1 Month -> 30 days 3 Month -> 91 days (?) 6 Month -> 182 days (?) 12 Month -> 1 year 20 Month -> 1 year and 8 month 1 year -> 365 days 4 years -> 1461 days (4* 1 years + 1 day) (?) 400 years -> 146099 days (100 * 4 years - 1 day) (?) 2000 years -> 730496 (5* 400 years +1 day) (?) Christian ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=116528&aid=708002&group_id=16528 |
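[Billy's "remember the original parts" idea from this thread — an interval object that keeps years/months/days separately and applies the month part calendar-wise rather than as a fixed number of days — can be sketched like this. The Interval class and its add_to() method are illustrative names, not part of pyPgSQL or mxDateTime.]

```python
import calendar
import datetime

class Interval:
    """Component-keeping interval, per the suggestion above (sketch only)."""

    def __init__(self, years=0, months=0, days=0):
        self.years, self.months, self.days = years, months, days

    def add_to(self, d):
        # Apply years/months calendar-wise (so '1 year' lands on the same
        # calendar day, as PostgreSQL does), then add days as an exact span.
        total = (d.month - 1) + self.months + 12 * self.years
        year, month = d.year + total // 12, total % 12 + 1
        day = min(d.day, calendar.monthrange(year, month)[1])
        return datetime.date(year, month, day) + datetime.timedelta(days=self.days)


# '7/11/1954'::date + '1 year'::interval -> 1955-07-11, matching the psql
# session quoted earlier in the thread:
assert Interval(years=1).add_to(datetime.date(1954, 7, 11)) == datetime.date(1955, 7, 11)
# Month-end is clamped, as calendar arithmetic requires:
assert Interval(months=1).add_to(datetime.date(2003, 1, 31)) == datetime.date(2003, 2, 28)
```

This is exactly why a fixed conversion (360 or 365.24 days per year) cannot reproduce PostgreSQL's behavior: the result of adding "1 year" depends on the date it is added to.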
From: SourceForge.net <no...@so...> - 2003-03-26 10:09:29
|
Feature Requests item #708393, was opened at 2003-03-23 16:46 You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=366528&aid=708393&group_id=16528 Category: None Group: None Status: Open Priority: 3 Submitted By: Christian Heimes (tiran) Assigned to: Billy G. Allie (ballie01) Summary: Moving to python2.2 new style classes and datetime Initial Comment: datetime: http://www.python.org/dev/doc/devel/lib/module-datetime.html new style classes: http://www.python.org/doc/2.2.1/whatsnew/whatsnew22.html Datetime will be a part of python2.3 and is used in zope3. New style classes would simplify a lot of code, for example: class PgInt8(long): pass Christian ---------------------------------------------------------------------- >Comment By: Gerhard Häring (ghaering) Date: 2003-03-26 11:23 Message: Logged In: YES user_id=163326 Not returning PgInt* from queries, but keeping them available to do range checking sounds like the ideal compromise to me. ---------------------------------------------------------------------- Comment By: Billy G. Allie (ballie01) Date: 2003-03-26 09:13 Message: Logged In: YES user_id=8500 As Gerhard stated, the reason for the PgInt* types is that the valid range for the various PostgreSQL integer types is fixed and smaller than what can be represented by the python types. It's a question of where you want to catch the error -- in the front end or at the back end. I prefer to catch overflow errors in the front end. Also, a constraint failure specified on a field in a table (e.g. x must be between y and z) is, to me, a different class of error than trying to insert a value that is outside the range of the given type. That said, I agree that the current PgInt* types may be more trouble than they are worth, since simple subclassing of the type and adding a range check doesn't work in python 2.2. I think that the code could be changed to return a Python int (or long for PgInt8). 
The PgInt* classes can remain and could be used to cast the ints before sending them to the backend if someone wanted to range check the values before they are sent to the PostgreSQL backend. Your thoughts? ---------------------------------------------------------------------- Comment By: Christian Heimes (tiran) Date: 2003-03-23 22:05 Message: Logged In: YES user_id=560817 For datetime: For now there is only a python prototype of the new upcoming Datetime module shipped with Zope3. I hope Datetime will be backported to Python-2.2. For numeric: Perhaps you should have a glance at pynum (numerical python) utilities. Users who need exact numerical representation of numbers would have pynum installed, so you can use the pynum classes. If it's not installed, then it seems that the users don't need exact values. For int and float: For me, I have to convert all special pyPgSQL types into ordinary types like int, float and long to use them in Zope3. This is due to the security framework of Zope3. Every class, schema or type must have an interface which describes what methods and attributes are public (roughly speaking; see the Zope3 WikiWikiWeb for more information). In my opinion this is a performance hit. ---------------------------------------------------------------------- Comment By: Gerhard Häring (ghaering) Date: 2003-03-23 21:11 Message: Logged In: YES user_id=163326 For datetime: if there's an easily installable (read: distutilified) Python package for Python 2.2, we might use the Python datetime stuff in pyPgSQL 3.x instead of mxDateTime. Yes, Python's int, long, float can handle all numbers that PostgreSQL can handle in the corresponding types. But not necessarily the other way round. For example, a PgInt2 has a much smaller range than a Python int. Ditto for PgInt8 and Python long. So if you insert a (large) long in an INT8 column, you can get an ERROR from the PostgreSQL side. That was why Billy implemented the PgInt* types, so he told me. 
I personally (currently) don't think they're worth the effort, as they only catch a (possibly small) range of errors. For example, there could still be additional CONSTRAINTs. We still need an equivalent of NUMERIC, though. And this one is the only really hard type to implement. ---------------------------------------------------------------------- Comment By: Christian Heimes (tiran) Date: 2003-03-23 20:40 Message: Logged In: YES user_id=560817 The new Datetime module for python2.3 is used under both python2.2 and python2.3 in the upcoming Zope3. It's tested and often used. By the way, it would help me implement a database adapter in Zope3. :) Do we really need range checks for numerical types? I thought python's int, long and float could handle all numbers that could be handled by postgres (except maybe numeric). ---------------------------------------------------------------------- Comment By: Gerhard Häring (ghaering) Date: 2003-03-23 19:30 Message: Logged In: YES user_id=163326 First, exploiting new-style classes and other Python 2.3 features is what pyPgSQL 3.0 is all about. It will require Python 2.2, but I don't think we'll want to use the new datetime module from 2.3. Unfortunately, the pyPgSQL developers are generally short on time; that's why not that much has been accomplished on the 3.0 front, yet. At least from me ;-) If you're interested in other plans, you can look in the TODO file, which is in CVS. Also note that the current PgInt* classes do range checks. Just subclassing from long doesn't get us this feature. I'm still sceptical about the merit of the PgInt* types, but that's just my personal opinion and better discussed on the mailing list. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=366528&aid=708393&group_id=16528 |
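[For reference, the range-checked integer the thread discusses is straightforward as a new-style class in current Python by overriding __new__; whether this worked cleanly under Python 2.2 is exactly what the thread disputes. The class below reuses the PgInt2 name and the INT2 range for illustration only; it is not pyPgSQL's actual implementation.]

```python
class PgInt2(int):
    """int subclass that rejects values outside PostgreSQL's int2 range.
    Illustrative sketch -- not pyPgSQL's actual PgInt2 code."""

    MIN, MAX = -32768, 32767

    def __new__(cls, value):
        v = int(value)
        if not cls.MIN <= v <= cls.MAX:
            # Catch the overflow in the front end, before the backend sees it.
            raise OverflowError("%d is outside the int2 range" % v)
        return int.__new__(cls, v)


assert PgInt2(123) + 1 == 124        # behaves like an ordinary int
assert isinstance(PgInt2(5), int)    # so it passes anywhere an int is expected

raised = False
try:
    PgInt2(40000)                    # too big for int2
except OverflowError:
    raised = True
assert raised
```

This matches the compromise Gerhard describes: queries can return plain ints, while a class like this stays available for callers who want the range checked before the value is sent to the backend.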