With pyPgSQL, some values are not returned as basic Python types. For the Postgres 'bigint' column type I get a PgInt8 instead of a Python long.
The problem: with these types, some type comparisons (generated by PythonGenerator.py) and some assert statements fail, because type(PgInt8 value) != type(0L).
Looking around, I found no way to tell the DB-API 2.0 compliant PyPgSQL to return only native types.
These are two possible solutions:
1. The assert statements and type(value) comparisons should check not just one type, but a compatible type family. In my PostgresStorage I would override this and place the compatible types into the lists (IntTypes, LongTypes, FloatTypes, DateTimeTypes, ObjRefTypes). The checks would have to be changed in design/PythonGenerator.py.
2. Convert all values right after rows are retrieved from the database: in file SQLObjectStore.py, function fetchObjectsOfClass, insert a hook (convertValues) after fetchall() but before further processing.
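A minimal sketch of both proposals. PgInt8 below is a stand-in for pyPgSQL's wrapper type, and the names LongTypes and convertValues are assumptions for illustration, not existing MiddleKit API:

```python
class PgInt8:
    """Stand-in for pyPgSQL's PgInt8 wrapper around a 64-bit value."""
    def __init__(self, value):
        self.value = value
    def __int__(self):
        return self.value

# Proposal 1: check against a type family instead of a single type.
# A PostgresStorage subclass would add the driver-specific types here.
LongTypes = (int, PgInt8)

def checkLong(value):
    assert type(value) in LongTypes, 'unexpected type: %r' % type(value)

# Proposal 2: a hook that coerces every fetched value back to a native
# Python type, called right after fetchall() and before further processing.
NATIVE_TYPES = (int, float, str, bytes, type(None))

def convertValues(rows):
    """Return rows with any non-native numeric wrappers coerced to int."""
    return [tuple(v if isinstance(v, NATIVE_TYPES) else int(v) for v in row)
            for row in rows]

rows = [(PgInt8(42), 'alpha'), (PgInt8(7), 'beta')]
checkLong(PgInt8(42))       # passes with the widened type family
rows = convertValues(rows)  # now plain ints: [(42, 'alpha'), (7, 'beta')]
```

Either approach keeps generated code working; the second has the advantage that everything downstream only ever sees native Python types.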
Any other suggestions?
Could Chuck as MiddleKit chief give his opinion on this?
> Hi all,
> with the pyPgSQL some values are not returned as basic Python types. For Postgres 'bigint' I get a PgInt8 type.
It's been a very long time, but I was the one who wrote this code. Here is
the discussion that I had with Chuck at the time:
At 11:57 PM 7/26/2001 -0700, Greg Brauer wrote:
From: Chuck Esterbrook <ChuckEsterbrook@...>
Date: Thu, 26 Jul 2001 23:33:40 -0400
Subject: Re: Think I've got it.
> At 07:54 PM 7/26/2001 -0700, Gregory Brauer wrote:
> >In other words, internally we're always storing as Pg types. You had
> >mentioned that a PgInt8 would be better than a long, memory wise,
> >and had suggested using that as the internal storage mechanism.
> >To be consistent then, a boolean would be the same. Do you think
> >I'm on the right track here?
> Basically, and this obviously works. But then I'm wondering if this will
> affect the portability of MK code. Like if I switch my database from
> Postgres to MySQL or to something else, will this approach make it more
> likely for things to break?
> I'm leaning towards sticking with ordinary Python ints and longs for ...
Ok, here's another factor. An ObjRef is hard coded in SQLObjectStore
to be a long. So, I can't store references internally as a PgInt8,
which is how they come out of the database. So now I'm faced with
a) overriding a major part of SQLObjectStore in PgSQLObjectStore,
b) having ObjRefs be converted to Python native types, while Bools
and Longs are Pg types, or
c) converting all Pg Types to Python types across the board.
What are your thoughts?
I prefer (c) for bona fide Python values like bools and ints, but an obj ref
that is a "long" really isn't about Python, it's about internal MK
implementation. Looking at SQLObjectStore, it looks like the hard coding is
the assertion in fetchObjRef(). We could change that to:
assert type(value) in self.joinedObjRefTypes()
Then a subclass could override the method to return a tuple with more elements.
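A sketch of that override. joinedObjRefTypes and fetchObjRef here are simplified stand-ins for the SQLObjectStore methods under discussion, and PgInt8 again stands in for pyPgSQL's type:

```python
class SQLObjectStore:
    def joinedObjRefTypes(self):
        # the base store accepts plain Python integers (int/long in Python 2)
        return (int,)

    def fetchObjRef(self, value):
        # the previously hard-coded check, now expressed via the hook
        assert type(value) in self.joinedObjRefTypes()
        return value

class PgInt8(int):
    """Stand-in for pyPgSQL's PgInt8; behaves numerically like int."""

class PgSQLObjectStore(SQLObjectStore):
    def joinedObjRefTypes(self):
        # widen the accepted tuple with the driver-specific type
        return SQLObjectStore.joinedObjRefTypes(self) + (PgInt8,)
```

With this, the Postgres store passes PgInt8 values straight through fetchObjRef(), while the base store's assertion still rejects them.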
There are also the funcs objRefJoin() and objRefSplit(), but since they use
numerical operators, I don't think there should be interference with PgInt8.
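To illustrate why PgInt8 interoperates here, a sketch of the join/split arithmetic (illustrative only, not the actual MiddleKit source): a joined obj ref packs the class id into the high 32 bits and the serial number into the low 32 bits, so only numeric operators are involved, and any integer-like type works.

```python
def objRefJoin(klassId, serialNum):
    """Pack (klassId, serialNum) into one 64-bit integer."""
    return (klassId << 32) | (serialNum & 0xFFFFFFFF)

def objRefSplit(objRef):
    """Recover (klassId, serialNum) from a joined obj ref."""
    return (int(objRef) >> 32, int(objRef) & 0xFFFFFFFF)
```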
So in summary, my thoughts are,
- convert all data (bool, int, long, float) to Pythonic data
- let "joined" obj ref values be as efficient as they can be, which
in this case would be a PgInt8
And I'm pretty sure I did what was recommended here, so that is why a PgInt8
remains so... because it is most typically used to describe an obj ref.
It's been 6 months since I looked at this code, so I apologize in advance
if I'm wrong about something here.