datetime:
http://www.python.org/dev/doc/devel/lib/module-datetime.html
new style classes:
http://www.python.org/doc/2.2.1/whatsnew/whatsnew22.html
The datetime module will be part of Python 2.3 and is used in Zope 3.
New-style classes would simplify a lot of code, for example:
class PgInt8(long):
    pass
Christian
Logged In: YES
user_id=163326
First, exploiting new-style classes and other Python 2.3
features is what pyPgSQL 3.0 is all about.
It will require Python 2.2, but I don't think we'll want to
use the new datetime module from 2.3.
Unfortunately, the pyPgSQL developers are generally short on
time, which is why not much has been accomplished on the 3.0
front yet. At least not by me ;-)
If you're interested in other plans, you can look in the
TODO file, which is in CVS.
Also note that the current PgInt* classes do range checks.
Just subclassing from long doesn't get us this feature.
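A range-checking subclass would have to do the check itself,
roughly like this (just a sketch, not the actual pyPgSQL code;
the bounds are PostgreSQL's documented INT8 limits):

class PgInt8(long):
    # Sketch: enforce PostgreSQL's INT8 range at construction time.
    MIN = -2L ** 63
    MAX = 2L ** 63 - 1

    def __new__(cls, value):
        value = long(value)
        if not cls.MIN <= value <= cls.MAX:
            raise OverflowError("out of range for INT8: %r" % value)
        return long.__new__(cls, value)

PgInt8(2L ** 64) would then raise OverflowError before the
value ever reaches the backend.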
I'm still sceptical about the merit of the PgInt* types, but
that's just my personal opinion and is better discussed on the
mailing list.
Logged In: YES
user_id=560817
The new datetime module for Python 2.3 is used under both
Python 2.2 and Python 2.3 in the upcoming Zope 3. It's tested
and often used. By the way, it would help me implement a
database adapter in Zope 3. :)
Do we really need range checks for numerical types? I thought
Python's int, long and float could handle all numbers that can
be handled by PostgreSQL (maybe except for numeric).
Logged In: YES
user_id=163326
For datetime: if there's an easily installable (read:
distutilified) Python package for Python 2.2, we might use
the Python datetime stuff in pyPgSQL 3.x instead of mxDateTime.
Yes, Python's int, long, float can handle all numbers that
PostgreSQL can handle in the corresponding types. But not
necessarily the other way round. For example, a PgInt2 has a
much smaller range than a Python int. Ditto for PgInt8 and
Python long.
So if you insert a (large) long into an INT8 column, you can
get an ERROR from the PostgreSQL side. That is why Billy
implemented the PgInt* types, or so he told me.
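To make that concrete (the limits below are PostgreSQL's
documented ranges; the snippet itself is only illustrative):

# PostgreSQL integer limits vs. Python's unbounded long:
INT2_MAX = 32767                   # int2: -32768 .. 32767
INT4_MAX = 2147483647              # int4: -2**31 .. 2**31 - 1
INT8_MAX = 9223372036854775807L    # int8: -2**63 .. 2**63 - 1

too_big = 2L ** 64    # a perfectly valid Python long...
# ...but inserting it into an INT8 column draws an
# out-of-range ERROR from the backend.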
I personally (currently) don't think they're worth the
effort, as they only catch a (possibly small) range of
errors. For example, there could still be additional
CONSTRAINTs.
We still need an equivalent of NUMERIC, though. And this one
is the only really hard type to implement.
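Just to sketch why NUMERIC is hard (purely hypothetical, not a
proposal for the actual implementation): even a minimal exact
representation needs an unscaled integer plus a decimal scale,
and then all the arithmetic and rounding rules on top of that.

class PgNumericSketch:
    # Hypothetical: store NUMERIC as an unscaled long plus a
    # decimal scale, so 12.345 becomes (12345L, 3). Arithmetic,
    # comparisons and rounding are left out -- the hard part.
    def __init__(self, unscaled, scale):
        self.unscaled = long(unscaled)
        self.scale = int(scale)

    def __str__(self):
        digits = str(abs(self.unscaled))
        while len(digits) <= self.scale:
            digits = "0" + digits
        if self.scale:
            digits = digits[:-self.scale] + "." + digits[-self.scale:]
        if self.unscaled < 0:
            digits = "-" + digits
        return digits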
Logged In: YES
user_id=560817
For datetime: for now there is only a Python prototype of the
new upcoming datetime module shipped with Zope 3. I hope
datetime will be backported to Python 2.2.
For numeric: perhaps you should have a glance at the pynum
(numerical Python) utilities. Users who need an exact numerical
representation of numbers would have pynum installed, so you
can use the pynum classes. If it's not installed, then it seems
that those users don't need exact values.
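Something like this optional-dependency pattern is what I have
in mind (the module and class names below are placeholders, not
pynum's real API):

# Placeholder names -- substitute whatever exact-number
# class pynum actually provides.
try:
    from pynum import ExactNumber    # hypothetical import
    PgNumeric = ExactNumber          # exact values if available
except ImportError:
    PgNumeric = float                # inexact fallback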
For int and float: as for me, I have to convert all special
pyPgSQL types into ordinary types like int, float and long to
use them in Zope 3. This is due to the security framework of
Zope 3. Every class, schema or type must have an interface
which describes what methods and attributes are public (roughly
speaking; see the Zope 3 WikiWikiWeb for more information). In
my opinion this is a performance hit.
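The conversion itself is simple enough, something like this
(illustrative sketch only):

def to_builtin(value):
    # Strip pyPgSQL wrapper types down to plain builtins
    # before handing the value to Zope 3.
    for base in (int, long, float):
        if isinstance(value, base):
            return base(value)
    return value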
Logged In: YES
user_id=8500
As Gerhard stated, the reason for the PgInt* types is that the
valid ranges for the various PostgreSQL integer types are fixed
and smaller than what can be represented by the Python types.
It's a question of where you want to catch the error -- in the
front end or at the back end. I prefer to catch overflow errors
in the front end. Also, a constraint failure specified on a
field in a table (e.g. x must be between y and z) is, to me, a
different class of error than trying to insert a value that is
outside the range of the given type.
That said, I agree that the current PgInt* types may be more
trouble than they are worth, since simply subclassing the type
and adding a range check doesn't work in Python 2.2. I think
that the code could be changed to return a Python int (or long
for PgInt8). The PgInt* classes can remain and could be used to
cast values for anyone who wants to range check them before
they are sent to the PostgreSQL backend.
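Usage would then look something like this (hypothetical,
building on the range-checking sketch above):

value = 2L ** 64    # comes back from a query as a plain long
try:
    checked = PgInt8(value)    # explicit client-side range check
except OverflowError:
    print "out of INT8 range -- caught before the backend sees it"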
Your thoughts?
Logged In: YES
user_id=163326
Not returning PgInt* from queries, but keeping them
available to do range checking sounds like the ideal
compromise to me.