modeling-users Mailing List for Object-Relational Bridge for Python (Page 30)
Status: Abandoned
Brought to you by: sbigaret
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2002 |     |     |     |     |     |     |     | 2   | 3   |     |     |     |
| 2003 | 19  | 55  | 54  | 48  | 41  | 40  | 156 | 56  | 90  | 14  | 41  | 32  |
| 2004 | 6   | 57  | 38  | 23  | 3   | 40  | 39  | 82  | 31  | 14  |     | 9   |
| 2005 |     | 4   | 13  |     | 5   | 2   |     | 1   |     |     |     | 1   |
| 2006 | 1   | 1   | 9   | 1   |     | 1   | 5   |     | 5   | 1   |     |     |
| 2007 |     |     |     |     |     |     |     |     |     | 2   | 1   |     |
| 2009 |     |     |     |     |     |     | 4   |     |     |     |     |     |
| 2011 |     |     |     |     |     |     | 1   |     |     |     |     |     |
From: Yannick G. <yan...@sa...> - 2003-07-16 19:52:08
On July 16, 2003 03:35 pm, Sebastien Bigaret wrote:
> [...]
> The first traceback you gave was referring to db_error(), and it would
> be interesting to see what kind of error it was, wouldn't it ?-)
Sure !
Couldn't evaluate expression SELECT t0.gl_id, t0.account, t0.control_acct,
t0.uom_id, t0.acct_type, t0.is_active, t0.gvt_code FROM GL t0 WHERE
(t0.is_active <> -255 AND t0.account LIKE '%Ã©Ã©%' AND t0.gvt_code LIKE
'%%'). Reason: exceptions.UnicodeError: ASCII encoding error: ordinal not in
range(128)
Traceback (most recent call last):
  File "belugaerp/modules/gl/GLModule.py", line 302, in ?
    mod.getGLAccountWithSpec( spec )
  File "/home/ygingras/BelugaERP/belugaerp/modules/gl/GLAccountManager.py", line 62, in getGLAccountWithSpec
    recs = self.getRecsWithSpec(recSpec)
  File "/home/ygingras/BelugaERP/belugaerp/modules/gl/I18NedManager.py", line 113, in getRecsWithSpec
    recs = self._mainManager.getRecsWithSpec(recSpec)
  File "/home/ygingras/BelugaERP/belugaerp/modules/gl/SimpleManager.py", line 104, in getRecsWithSpec
    recs = self._mm.fetch(self._tableName, qual)
  File "/home/ygingras/BelugaERP/belugaerp/modules/gl/ModelManager.py", line 55, in fetch
    self.__ec.fetch(entName, qualifier, rawRow=raw) )
  File "/usr/lib/python2.2/site-packages/Modeling/EditingContext.py", line 1304, in fetch
    return self.objectsWithFetchSpecification(fs)
  File "/usr/lib/python2.2/site-packages/Modeling/EditingContext.py", line 1218, in objectsWithFetchSpecification
    objects=self.parentObjectStore().objectsWithFetchSpecification(fs, ec)
  File "/usr/lib/python2.2/site-packages/Modeling/ObjectStoreCoordinator.py", line 420, in objectsWithFetchSpecification
    return store.objectsWithFetchSpecification(aFetchSpecification, anEditingContext)
  File "/usr/lib/python2.2/site-packages/Modeling/DatabaseContext.py", line 1521, in objectsWithFetchSpecification
    anEditingContext)
  File "/usr/lib/python2.2/site-packages/Modeling/DatabaseChannel.py", line 381, in selectObjectsWithFetchSpecification
    entity)
  File "/usr/lib/python2.2/site-packages/Modeling/DatabaseAdaptors/AbstractDBAPI2AdaptorLayer/AbstractDBAPI2AdaptorChannel.py", line 297, in selectAttributes
    raise GeneralAdaptorException, msg
Modeling.Adaptor.GeneralAdaptorException
> > Indeed it does the job, but as it was discussed some time ago on the
> > mailing list, it does not enable case-insensitive matching. A
> > case-insensitive match with u"éé" encoded in utf-8 will look for "Ã©Ã©",
> > "ã©Ã©", "Ã©ã©" and "ã©ã©", which does not make any sense once put back in
> > unicode. "éé", "Éé" and "ÉÉ" are respectively "Ã©Ã©", "ÃÃ©" and "ÃÃ"
> > once encoded.
> >
> > So it may be wise to let the user do the utf-8 trick. That way he
> > won't blame you for the weird results of a case-insensitive match. On
> > the other hand, some databases like PostgreSQL detect the encoding and
> > perform a decent case-insensitive match with utf-8 data.
>
> This needs investigation. If some of you could provide working python
> code with unicode and psycopg/pypgsql/pgdb/mysqldb/sqlitedb, please
> share. I've no time for this now.
I'll see what I can do, but unicode support is not in MySQL 4.0, it's in
4.1, which is still alpha...
http://www.mysql.com/doc/en/Nutshell_4.1_features.html
> However, speaking of case-insensitive match: if postgresql supports it,
> then it should work, since the SQL WHERE clause behind is UPPER(...)
> LIKE UPPER(...) --pure theory and not tested, so if someone feels like
> testing it, go ahead :)
No PostgreSQL here to run tests, but I'd like to hear from people who
try this.
--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468
From: Sebastien B. <sbi...@us...> - 2003-07-16 19:35:51
Yannick Gingras <yan...@sa...> wrote:
> > Okay, so Soif was right, database logging makes it fail (unsurprisingly
> > I must admit).
> >
> > Could you tell what happens if you disable MDL_ENABLE_DATABASE_LOGGING?
> > Does it fail and if yes, where? (traceback as well would be ok)
> > [given that you disable the encoding in your makeMatchQual]
>
> Sure !
>
[...]
> "/usr/lib/python2.2/site-packages/Modeling/DatabaseAdaptors/AbstractDBAPI=
2AdaptorLayer/AbstractDBAPI2AdaptorChannel.py",
> line 288, in selectAttributes
> db_info('Evaluating: %s'%statement)
> File "/usr/lib/python2.2/site-packages/Modeling/logging.py", line 56, in
> log_stderr
> sys.stderr.write('%s\n'%msg)
> UnicodeError: ASCII encoding error: ordinal not in range(128)
>
> It sounds pretty much the same to me...
Okay, sorry, didn't read it right. I suspect that you *enabled* it
there, and that it was disabled before.
Could you re-try this (without MDL_ENABLE_DATABASE_LOGGING) with:
------------------------------------------------------------------------
--- logging.py	20 Feb 2003 11:48:58 -0000	1.5
+++ logging.py	16 Jul 2003 19:28:49 -0000
@@ -53,6 +53,8 @@
 import os, sys
 
 def log_stderr(msg):
+  if type(msg) is type(u''):
+    msg=msg.encode('utf-8')
   sys.stderr.write('%s\n'%msg)
 no_log=lambda msg, severity=0: None
 trace=debug=info=log=warn=error=fatal=no_log
------------------------------------------------------------------------
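For context, a minimal sketch of the failure the patch works around (Python 2,
assuming stderr's encoding is ASCII; the sample message is made up):

    import sys

    msg = u"Evaluating: ... LIKE '%\xe9\xe9%' ..."      # unicode with non-ASCII chars
    try:
        sys.stderr.write('%s\n' % msg)                  # implicit ASCII encode -> UnicodeError
    except UnicodeError:
        sys.stderr.write('%s\n' % msg.encode('utf-8'))  # what the patched log_stderr() now does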
The first traceback you gave was referring to db_error(), and it would
be interesting to see what kind of error it was, wouldn't it ?-)
> > Ok. Sorry I did not understand you right. So this is as simple as that,
> > just encode the attributes' value and that's it? I must try that one of
> > these days.
>
> Indeed it does the job, but as it was discussed some time ago on the
> mailing list, it does not enable case-insensitive matching. A
> case-insensitive match with u"éé" encoded in utf-8 will look for "Ã©Ã©",
> "ã©Ã©", "Ã©ã©" and "ã©ã©", which does not make any sense once put back in
> unicode. "éé", "Éé" and "ÉÉ" are respectively "Ã©Ã©", "ÃÃ©" and "ÃÃ"
> once encoded.
>
> So it may be wise to let the user do the utf-8 trick. That way he
> won't blame you for the weird results of a case-insensitive match. On
> the other hand, some databases like PostgreSQL detect the encoding and
> perform a decent case-insensitive match with utf-8 data.
This needs investigation. If some of you could provide working python
code with unicode and psycopg/pypgsql/pgdb/mysqldb/sqlitedb, please
share. I've no time for this now.
However, speaking of case-insensitive match: if postgresql supports it,
then it should work, since the SQL WHERE clause behind is UPPER(...)
LIKE UPPER(...) --pure theory and not tested, so if someone feels like
testing it, go ahead :)
-- Sébastien.
From: Yannick G. <yan...@sa...> - 2003-07-16 19:14:30
> Okay, so Soif was right, database logging makes it fail (unsurprisingly
> I must admit).
>
> Could you tell what happens if you disable MDL_ENABLE_DATABASE_LOGGING?
> Does it fail and if yes, where? (traceback as well would be ok)
> [given that you disable the encoding in your makeMatchQual]
Sure !
[...]
File "/usr/lib/python2.2/site-packages/Modeling/EditingContext.py", line
1304, in fetch
return self.objectsWithFetchSpecification(fs)
File "/usr/lib/python2.2/site-packages/Modeling/EditingContext.py", line
1218, in objectsWithFetchSpecification
objects=3Dself.parentObjectStore().objectsWithFetchSpecification(fs, ec)
File "/usr/lib/python2.2/site-packages/Modeling/ObjectStoreCoordinator.py=
",
line 420, in objectsWithFetchSpecification
return store.objectsWithFetchSpecification(aFetchSpecification,
anEditingContext)
File "/usr/lib/python2.2/site-packages/Modeling/DatabaseContext.py", line
1521, in objectsWithFetchSpecification
anEditingContext)
File "/usr/lib/python2.2/site-packages/Modeling/DatabaseChannel.py", line
381, in selectObjectsWithFetchSpecification
entity)
File
"/usr/lib/python2.2/site-packages/Modeling/DatabaseAdaptors/AbstractDBAPI2A=
daptorLayer/AbstractDBAPI2AdaptorChannel.py",
line 288, in selectAttributes
db_info('Evaluating: %s'%statement)
File "/usr/lib/python2.2/site-packages/Modeling/logging.py", line 56, in
log_stderr
sys.stderr.write('%s\n'%msg)
UnicodeError: ASCII encoding error: ordinal not in range(128)
It sounds pretty much the same to me...
> > What I said earlier is that I manually utf-8 encode everything that I
> > *store* in the DB. I thought that I might be safe with queries but
> > well, utf-8 encoding queries too is not that much work.
>
> Ok. Sorry I did not understand you right. So this is as simple as that,
> just encode the attributes' value and that's it? I must try that one of
> these days.
Indeed it does the job, but as it was discussed some time ago on the
mailing list, it does not enable case-insensitive matching. A
case-insensitive match with u"éé" encoded in utf-8 will look for "Ã©Ã©",
"ã©Ã©", "Ã©ã©" and "ã©ã©", which does not make any sense once put back in
unicode. "éé", "Éé" and "ÉÉ" are respectively "Ã©Ã©", "ÃÃ©" and "ÃÃ"
once encoded.
So it may be wise to let the user do the utf-8 trick. That way he
won't blame you for the weird results of a case-insensitive match. On
the other hand, some databases like PostgreSQL detect the encoding and
perform a decent case-insensitive match with utf-8 data.
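To make the byte-level problem concrete, a small Python 2 sketch (illustrative
only, not taken from the application code):

    lower = u"\xe9\xe9"                  # u"éé"
    upper = u"\xc9\xc9"                  # u"ÉÉ"
    print repr(lower.encode("utf-8"))    # '\xc3\xa9\xc3\xa9'
    print repr(upper.encode("utf-8"))    # '\xc3\x89\xc3\x89'
    # a byte-wise UPPER() will never map '\xc3\xa9' onto '\xc3\x89', so
    # UPPER(...) LIKE UPPER(...) on utf-8 bytes cannot be case-insensitive
    # unless the database itself understands utf-8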
--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468
From: Sebastien B. <sbi...@us...> - 2003-07-16 18:29:49
Yannick Gingras <yan...@sa...> writes:
> Soif> Could you please give us a bigger traceback?
> Soif> I talked w/ Sebastien last week about this. In fact I got some
> Soif> trouble w/ MySQL and unicode (not using Modeling), and asked
> Soif> him what he did to cover this issue in Modeling.
>
> sure !
>
[...]
>   File "/usr/lib/python2.2/site-packages/Modeling/DatabaseAdaptors/AbstractDBAPI2AdaptorLayer/AbstractDBAPI2AdaptorChannel.py", line 295, in selectAttributes
>     db_error(msg)
>   File "/usr/lib/python2.2/site-packages/Modeling/logging.py", line 56, in log_stderr
>     sys.stderr.write('%s\n'%msg)
> UnicodeError: ASCII encoding error: ordinal not in range(128)
Okay, so Soif was right, database logging makes it fail (unsurprisingly
I must admit).
Could you tell what happens if you disable MDL_ENABLE_DATABASE_LOGGING?
Does it fail and if yes, where? (traceback as well would be ok)
[given that you disable the encoding in your makeMatchQual]
> What I said earlier is that I manually utf-8 encode everything that I
> *store* in the DB. I thought that I might be safe with queries but
> well, utf-8 encoding queries too is not that much work.
Ok. Sorry I did not understand you right. So this is as simple as that,
just encode the attributes' value and that's it? I must try that one of
these days.
-- Sébastien.
From: Yannick G. <yan...@sa...> - 2003-07-16 18:12:09
On July 16, 2003 02:05 pm, Yannick Gingras wrote:
> What I said earlier is that I manually utf-8 encode everything that I
> *store* in the DB. I thought that I might be safe with queries but
> well, utf-8 encoding queries too is not that much work.
Like this (since I make my qualifiers by hand, I have a nice spot to
trap unicode requests):
def makeMatchQual(self, key, matchType, matchPatern):
    if type(matchPatern) == type(u""):
        matchPatern = matchPatern.encode("utf-8")

    if matchType == LK:
        return Qualifier.KeyValueQualifier(key,
                                           Qualifier.QualifierOperatorLike,
                                           "*%s*" % matchPatern)
[...]
--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468
From: Yannick G. <yan...@sa...> - 2003-07-16 18:06:06
On July 16, 2003 11:44 am, you wrote:
> > UnicodeError: ASCII encoding error: ordinal not in range(128)
> >
> > Argh !
>
> You get this because you try to do something like this:
> ' %s ' % my_unicode
It seems to be fine here:
>>> "%s" % u"éé"
u'\xe9\xe9'
it's the print (or write()) that may choke:
>>> print u"éé"
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
UnicodeError: ASCII encoding error: ordinal not in range(128)
> Could you please give us a bigger traceback?
> I talked w/ Sebastien last week about this. In fact I got some
> trouble w/ MySQL and unicode (not using Modeling), and asked
> him what he did to cover this issue in Modeling.
sure !
File "/usr/lib/python2.2/site-packages/Modeling/EditingContext.py", line
1304, in fetch
return self.objectsWithFetchSpecification(fs)
File "/usr/lib/python2.2/site-packages/Modeling/EditingContext.py", line
1218, in objectsWithFetchSpecification
objects=3Dself.parentObjectStore().objectsWithFetchSpecification(fs, ec)
File "/usr/lib/python2.2/site-packages/Modeling/ObjectStoreCoordinator.py=
",
line 420, in objectsWithFetchSpecification
return store.objectsWithFetchSpecification(aFetchSpecification,
anEditingContext)
File "/usr/lib/python2.2/site-packages/Modeling/DatabaseContext.py", line
1521, in objectsWithFetchSpecification
anEditingContext)
File "/usr/lib/python2.2/site-packages/Modeling/DatabaseChannel.py", line
381, in selectObjectsWithFetchSpecification
entity)
File
"/usr/lib/python2.2/site-packages/Modeling/DatabaseAdaptors/AbstractDBAPI2A=
daptorLayer/AbstractDBAPI2AdaptorChannel.py",
line 295, in selectAttributes
db_error(msg)
File "/usr/lib/python2.2/site-packages/Modeling/logging.py", line 56, in
log_stderr
sys.stderr.write('%s\n'%msg)
UnicodeError: ASCII encoding error: ordinal not in range(128)
The unicode error is triggered by the logging code that tries to report an
error, probably a unicode error...
> It really seems that Modeling doesn't take care of unicode.
> (Read: Seb hasn't done much testing with unicode.)
>
>
> I haven't done much testing either, but I think that stuff like enabling
> the LOG will generate a lot of Unicode tracebacks.
> > there is "type(foo) in (type(''), type(u''))" or I could encode my
> > query or who knows what. Since some RDMS (aka MySQL) choke on
> > unicode, maybe it would be best to have every queries encoded in utf-8
> > but I prefer to have your opinion 1st.
>
> Another trick that might help you is that MySQL 4.0 support unicode
> in query. but the MySQLDB don't by default. In fact you can pass
> a special encoding charset at connection but you need to have
> a latest version of the package ( I have to re-build this from source
> on my debian since it isn't in the default unstable install)
What I said earlier is that I manually utf-8 encode everything that I
*store* in the DB. I thought that I might be safe with queries but
well, utf-8 encoding queries too is not that much work.
--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468
From: <so...@la...> - 2003-07-16 15:48:08
On Wed, Jul 16, 2003 at 11:08:30AM -0400, Yannick Gingras wrote:
>
>
> I stumbled upon some "if type(foo)==type(''):" in the code
> (grep -r -E "type\(''\)" .). This fails to match unicode strings, which
> behave like strings but are not of the same type:
>
> >>> type(u'') == type('')
> 0
>
> Is this some kind of obscure feature ?
>
> This is used in the Qualifier code and it seems likely to me that
> someone will eventually try to make a fetch with unicode. In fact I
> might just try that right now...
>
> UnicodeError: ASCII encoding error: ordinal not in range(128)
>
> Argh !
You get this because you try to do something like this:
' %s ' % my_unicode
Could you please give us a bigger traceback?
I talked w/ Sebastien last week about this. In fact I got some
trouble w/ MySQL and unicode (not using Modeling), and asked
him what he did to cover this issue in Modeling.
It really seems that Modeling doesn't take care of unicode.
(Read: Seb hasn't done much testing with unicode.)
I haven't done much testing either, but I think that stuff like enabling
the LOG will generate a lot of Unicode tracebacks.
> there is "type(foo) in (type(''), type(u''))" or I could encode my
> query or who knows what. Since some RDMS (aka MySQL) choke on
> unicode, maybe it would be best to have every queries encoded in utf-8
> but I prefer to have your opinion 1st.
Another trick that might help you is that MySQL 4.0 supports unicode
in queries, but MySQLdb doesn't by default. In fact you can pass
a special encoding charset at connection time, but you need the
latest version of the package (I had to rebuild it from source
on my Debian since it isn't in the default unstable install).
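For illustration, a rough sketch of that connection-time hint (the exact
keyword argument depends on the MySQLdb version, and the host/user/database
values here are only placeholders -- treat all of it as an assumption to check
against your installed package):

    import MySQLdb

    # newer MySQLdb releases accept a connection charset; older ones need the
    # rebuilt package mentioned above
    conn = MySQLdb.connect(host="localhost", user="beluga", passwd="secret",
                           db="belugaerp", charset="utf8")
    cursor = conn.cursor()
    cursor.execute("SELECT account FROM GL WHERE account LIKE %s",
                   (u"%\xe9\xe9%".encode("utf-8"),))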
Bye Bye .
From: Sebastien B. <sbi...@us...> - 2003-07-16 15:39:48
Yannick Gingras <yan...@sa...> writes:
> I stumbled upon some "if type(foo)==type(''):" in the code
> (grep -r -E "type\(''\)" .). This fails to match unicode strings, which
> behave like strings but are not of the same type:
>
> >>> type(u'') == type('')
> 0
>
> Is this some kind of obscure feature ?
>
> This is used in the Qualifier code and it seems likely to me that
> someone will eventually try to make a fetch with unicode. In fact I
> might just try that right now...
>
> UnicodeError: ASCII encoding error: ordinal not in range(128)
>
> Argh !
Funny... There's a message from you in the archives (20 Apr 2003, the thread
is named "Working with unicode"; I remember Mario also discussed this
there) suggesting that this was working... Did I misunderstand what you
were saying?
In fact, this surprised me a lot since I've never done anything
particular to support unicode. Python's unicode support was not
particularly wonderful (well, it wasn't when I looked at it a year and a
half ago: I had to dive into the code to find the encoders/decoders, the
documentation was almost nonexistent, and in the end everything was
messed up in my mind), so I just didn't care --and never needed it, to
be honest (except for xml models, because we were putting latin1
characters in them at that time).
> there is "type(foo) in (type(''), type(u''))" or I could encode my
> query or who knows what. Since some RDMS (aka MySQL) choke on
> unicode, maybe it would be best to have every queries encoded in utf-8
> but I prefer to have your opinion 1st.
As you can see, my opinion is that I have no opinion :/ I don't even
know how the different databases *and* the different python db-adaptors
behave, and I must admit that I do not really want to look at that.
You're right in saying such tests for strings should be made against
both regular and unicode strings, but I suspect this is only the easiest
part of it. My opinion, though... I've been bitten by unicode too hard
to be really objective about it.
So if you feel like looking at these things and summarizing them (either
by proposing a procedure for using unicode w/ the framework, or by
submitting patches), I'll be happy to collaborate to the best of my
knowledge --again, this would mainly mean my knowledge of the framework,
because my unicode background is something like... empty...
Others interested in this topic may react here too.
Regards,
-- Sébastien.
From: Yannick G. <yan...@sa...> - 2003-07-16 15:08:38
I stumbled upon some "if type(foo)==type(''):" in the code
(grep -r -E "type\(''\)" .). This fails to match unicode strings, which
behave like strings but are not of the same type:
>>> type(u'') == type('')
0
Is this some kind of obscure feature ?
This is used in the Qualifier code and it seems likely to me that
someone will eventually try to make a fetch with unicode. In fact I
might just try that right now...
UnicodeError: ASCII encoding error: ordinal not in range(128)
Argh !
there is "type(foo) in (type(''), type(u''))" or I could encode my
query or who knows what. Since some RDMS (aka MySQL) choke on
unicode, maybe it would be best to have every queries encoded in utf-8
but I prefer to have your opinion 1st.
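For illustration, a minimal sketch of the two checks side by side (Python 2;
basestring is the common base class of str and unicode, so either spelling
accepts both kinds of strings):

    def is_any_string(value):
        # suggested replacement for the current "type(value) == type('')"
        return type(value) in (type(''), type(u''))
        # equivalent, and a bit more idiomatic in Python 2:
        # return isinstance(value, basestring)

    assert is_any_string('abc')
    assert is_any_string(u'\xe9\xe9')    # u"éé" is now accepted as well
    assert not is_any_string(42)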
--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468
From: Sebastien B. <sbi...@us...> - 2003-07-16 14:01:02
Yannick Gingras <yan...@sa...> wrote:
> For some reason, I never experience such amazing performance boosts
Must be because the objects I used for benchmarking only had a PK and one
attribute!
Okay, back to the functionality: as it is made in the patch, fetchesRawRows
misses two important functionalities:
1. It must behave the way a normal fetch behaves. This means the inserted
   objects must be present, while deleted objects shouldn't be returned.
2. It does not work at all for nested ECs.
I thought that those of you who are already using the patch should be
aware of this.
I'm currently working on both problems. Unittests are written, now I'm
on the code itself. When this is integrated into the CVS, it will behave
as expected in both situations. I'll report back here then.
-- Sébastien.
From: Yannick G. <ygi...@yg...> - 2003-07-16 12:54:26
On Wednesday 16 July 2003 08:13, Sebastien Bigaret wrote:
> Thanks a lot Yannick for this nice clarification. I'll probably
> include your message as-is in the documentation until I find some time
> to rewrite it

No problem. And as I asked in private: why LaTeX instead of DocBook? DocBook
is a really simple markup with implied support for unicode since it's XML --
30 different tags max in a regular document. KDE, Gnome, Python and Linux are
all moving to DocBook. I would have put in the markup myself but this LaTeX
thing is beyond my mere mortal capabilities. ;)

> (I already have quite a bunch of doc. to write, and you
> now know what happens when I have too much doc. to write: I add
> features, which implies even more doc. to write, but well, that's my
> delaying tactics ;).

Everyone needs a bit of entertainment! :D

> Little digression: the very reason why I made the QualifierParser,
> then documented it only, is historically bound to the somewhat
> complex API for fetching objects; before we got the current
> ec.fetch(), fetching implied qualifierWithQualifierFormat,
> FetchSpecification and ec.objectsWithFetchSpecification(). People
> were not finding it that handy/friendly, imagine their groans if I
> had put on top of that step-by-step instructions to build
> qualifiers!

They would have hunted you down to poke your eyes out! Well, I would have
done just that... ;)

> > So I hope this will help you all to get the best out of the framework.
> > The best source of doc is not the doc, the doc may get a bit out of
> > sync. Dive into the sources, up to the unit tests. Sébastien works
> > hard to keep all of those working all the time, so you're sure to find a
> > lot of working examples there.
>
> Writing docs is very time-consuming, and writing *good* doc. is more
> than that ;) Hopefully it's not that bad...
>
> About unittests: yes, I maintain them very strictly. Mostly every bug
> gets its unittests before being fixed (even if I rarely expose them in
> patches), new features get unittests before I start coding, so you can
> definitely rely on them.

No, the doc is not that bad. This is just a reminder that code is meant to be
read by both humans and computers. Furthermore, a good programming language
will be more expressive than English. How many words would it take to explain
"(%s)" % ", ".join(map(str, aList)) ? And yet it's still quite readable. When
you can't find it in the doc, or when the doc is unclear, do a quick grep: it
might be obvious in Python.

The case of the unit tests is really special. The fact that they are your
quality-assurance kit means that they are guaranteed to work. There is in
those tests an amazing amount of working examples, and we need just that: a
picture. I think the doc should mention them, they are more valuable than you
think.

Regards,
--
Yannick Gingras
Coder for OBB : Out Biconcave Beingness
http://OpenBeatBox.org
From: Sebastien B. <sbi...@us...> - 2003-07-16 12:13:37
Thanks a lot Yannick for this nice clarification. I'll probably
include your message as-is in the documentation until I find some time
to rewrite it (I already have quite a bunch of doc. to write, and you
now know what happens when I have too much doc. to write: I add
features, which implies even more doc. to write, but well, that's my
delaying tactics ;).
Little digression: the very reason why I made the QualifierParser,
then documented it only, is historically bound to the somewhat
complex API for fetching objects; before we got the current
ec.fetch(), fetching implied qualifierWithQualifierFormat,
FetchSpecification and ec.objectsWithFetchSpecification(). People
were not finding it that handy/friendly, imagine their groans if I
had put on top of that step-by-step instructions to build
qualifiers!
> So I hope this will help you all to get the best out of the framework.
> The best source of doc is not the doc, the doc may get a bit out of
> sync. Dive into the sources, up to the unit tests. Sébastien works
> hard to keep all of those working all the time, so you're sure to find a
> lot of working examples there.
Writing docs is very time-consuming, and writing *good* doc. is more
than that ;) Hopefully it's not that bad...
About unittests: yes, I maintain them very strictly. Mostly every bug
gets its unittests before being fixed (even if I rarely expose them in
patches), new features get unittests before I start coding, so you can
definitely rely on them.
-- Sébastien.
From: Sebastien B. <sbi...@us...> - 2003-07-16 11:19:33
Matt Goodall <ma...@po...> writes:
> >> * maintaining a hierarchy of objects stored as a nested set
> >
> > Could you elaborate on that?
>
> Nested sets are a reasonably common way of describing a hierarchy of
> objects using a relational model. Here's a link to one article about it,
> http://www.intelligententerprise.com/001020/celko.shtml, but there are
> plenty of others.
>
> The nested set example in the article is somewhat simplified since
> everything is in one table. It is quite reasonable to have the left and
> right node information in one table (nested set), the object's "real"
> data in another table (object) and a foreign key to connect them. In
> this configuration it makes more sense to me to maintain the nested set
> table with direct SQL but let the ORM handle the object.

Wow, I remember having applied this to trees when I was doing some discrete
math, but I never applied it to data structures. Nice indeed, thanks for the
link; now I have two more things, formerly separated, connected in my brain ;)

Roaming around the web to read more about this, I've found this article you
may find interesting as well:
http://www.dbazine.com/tropashko4.html

Regards,
-- Sébastien.
From: Yannick G. <yan...@sa...> - 2003-07-15 19:30:41
Hi everyone,
I'd like to share with you the joy of building qualifiers by hand.
The use we have for the framework here is really dependent on
performance. So we looked at those profiler logs and asked Seb why
spark was sucking so much juice. Promptly we found out that spark was
slow because it is flexible, but it is possible to make Qualifiers by
hand without going through the ugly string-parsing step. Unfortunately
all this nice magic is undocumented right now :(
So it goes like this:
There is a bunch of operators out there, somewhere in Qualifier.py or,
if you prefer, Modeling.Qualifier. The operators are exactly the ones
you would expect to use in the query string:
QualifierOperatorEqual for "==", QualifierOperatorLessThan for "<",
QualifierOperatorLike for "like" and so on.
The main idea is to build some qualifiers that put those operators in
relation with your model. I used Qualifier.KeyValueQualifier but you
may want to try KeyComparisonQualifier too. That part is easy: you
create some qualifier instances with a key, an operator and a value:
# (the names below live in Modeling.Qualifier, as mentioned above)
from Modeling.Qualifier import KeyValueQualifier, AndQualifier, \
     NotQualifier, QualifierOperatorLike, QualifierOperatorLessThan

# match all the authors whose name starts with "H"
qual1 = KeyValueQualifier("name",
                          QualifierOperatorLike,
                          "H*")
# match only young authors
qual2 = KeyValueQualifier("age",
                          QualifierOperatorLessThan,
                          35)
# match the authors whose name ends with "s"
qual3 = KeyValueQualifier("name",
                          QualifierOperatorLike,
                          "*s")
And now that I'm done with the qualifiers, I simply need to put them together:
# I don't like authors whose name ends with "s"
qual4 = NotQualifier(qual3)
# pack all of those in a single qualifier
allQuals = (qual1, qual2, qual4)  # any sequence (list or tuple) will do
qual5 = AndQualifier(allQuals)
And finally do the fetch:
objs = ec.fetch("author", qual5)
Or the super fast:
dicts = ec.fetch("author", qual5, rawRow=1)
Nice, isn't it?
So how fast is it? Twice as fast on my data, your mileage may vary.
:)
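For comparison, the same fetch through the parsed-string interface that the
profiling pointed at (a sketch only: the exact qualifier-format syntax for
like/AND/NOT here is an assumption to check against the QualifierParser
documentation):

    # the hand-built qualifiers above avoid parsing this string with spark:
    objs = ec.fetch("author",
                    'name like "H*" AND age < 35 AND NOT (name like "*s")')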
So I hope this will help you all to get the best out of the framework.
The best source of doc is not the doc, the doc may get a bit out of
sync. Dive into the sources, up to the unit tests. Sébastien works
hard to keep all of those working all the time, so you're sure to find a
lot of working examples there.
Enjoy !
--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468
From: Yannick G. <yan...@sa...> - 2003-07-15 16:26:41
Here is a third version of the patch for QualifierOperatorIn. This one
handles longs in the value list of QualifierOperatorIn when using raw
Qualifiers:

    return "(%s)" % ", ".join(map(str, aList))

instead of:

    return str(tuple(aList))

--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468
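A quick Python 2 illustration of why that change matters (sample values only):

    aList = [1L, 2L, 3L]
    str(tuple(aList))                     # '(1L, 2L, 3L)' -- the 'L' suffix is not valid SQL
    "(%s)" % ", ".join(map(str, aList))   # '(1, 2, 3)'    -- str() drops the suffix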
From: Matt G. <ma...@po...> - 2003-07-15 14:53:23
Sebastien Bigaret wrote:
> Matt Goodall <ma...@po...> wrote:
> > Hmm, I don't actually have a use case at the moment ;-). In my somewhat
> > limited experience with ORMs, they can often handle the majority of
> > queries but there are times when it is easier or faster to directly
> > access the database. Obviously, it's important for the direct access to
> > be part of the ORM's transaction.
>
> Okay, so you meant both queries and updates. To summarize the current
> situation:
>
> - raw sql *queries* are not officially supported, nor documented.
>   However, there is a test case in test_EditingContext_Global.py you'd
>   probably want to refer to: test_999_customSQLQuery(), at:
>   http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/modeling/ProjectModeling/Modeling/tests/test_EditingContext_Global.py?rev=1.27&content-type=text/vnd.viewcvs-markup
>
>   You were right, the dbContext is one of the objects you need to
>   access --you'll notice that this is far from a user friendly API,
>   which would probably be EC.fetchSQL(), as already discussed earlier
>   here.

Ah, I had a feeling the DatabaseContext would be registered for the model
somewhere, since that is how it is defined in the XML/Py file. I didn't think
to look in the DatabaseContext module itself. And you're right, it's not user
friendly ;-). EC.fetchSQL() would be a lot more pleasant.

> > Here are a few use cases that might warrant direct database access:
> >
> >   * batch updates
>
> My _personal_ point of view here is that, if you do not want to pay for
> the extra payload of manipulating real objects, such batches should
> probably be made w/ the appropriate raw python adaptor. But I might have
> misunderstood your point.

In this case you are probably right. I would be unlikely to manipulate
objects via Modeling during a large batch update.

> >   * maintaining a hierarchy of objects stored as a nested set
>
> Could you elaborate on that?

Nested sets are a reasonably common way of describing a hierarchy of objects
using a relational model. Here's a link to one article about it,
http://www.intelligententerprise.com/001020/celko.shtml, but there are plenty
of others.

The nested set example in the article is somewhat simplified since everything
is in one table. It is quite reasonable to have the left and right node
information in one table (nested set), the object's "real" data in another
table (object) and a foreign key to connect them. In this configuration it
makes more sense to me to maintain the nested set table with direct SQL but
let the ORM handle the object.

- Matt

--
Matt Goodall, Pollenation Internet Ltd
w: http://www.pollenationinternet.com
e: ma...@po...
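For readers unfamiliar with the technique, a minimal sketch of the kind of
query a nested-set table supports (the table and column names are invented
for the example; they are not part of the Modeling framework):

    # every descendant of a node sits between its parent's lft and rgt values,
    # so a single self-join returns a whole subtree
    subtree_sql = """
        SELECT node.id, node.name
          FROM nested_set AS node, nested_set AS parent
         WHERE node.lft BETWEEN parent.lft AND parent.rgt
           AND parent.id = %s
         ORDER BY node.lft
    """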
From: Sebastien B. <sbi...@us...> - 2003-07-15 13:52:38
Matt Goodall <ma...@po...> wrote:
> I admit I'm fairly new to Modeling so I don't know its full
> potential. However, I'm a little wary of using an ORM which blocks me
> when either it can't handle a particularly complex use case or a
> hand-crafted query would improve a bottleneck in the database access
> layer. I guess I'm looking for an insurance policy ;-).
Fair enough!
> Hmm, I don't actually have a use case at the moment ;-). In my somewhat
> limited experience with ORMs, they can often handle the majority of queries
> but there are times when it is easier or faster to directly access the
> database. Obviously, it's important for the direct access to be part of the
> ORM's transaction.
Okay, so you meant both queries and updates. To summarize the current
situation:
- raw sql *queries* are not officially supported, nor documented.
  However, there is a test case in test_EditingContext_Global.py you'd
  probably want to refer to: test_999_customSQLQuery(), at:
  http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/modeling/ProjectModeling/Modeling/tests/test_EditingContext_Global.py?rev=1.27&content-type=text/vnd.viewcvs-markup
  You were right, the dbContext is one of the objects you need to
  access --you'll notice that this is far from a user friendly API,
  which would probably be EC.fetchSQL(), as already discussed earlier
  here.
- speaking of sql *updates* which should be bound to the
  EditingContext transaction (i.e. updates that should be atomically
  done along with the changes that ec.saveChanges() does): this is not
  supported for the time being.
The reason is: there is no way for now to bind a method to be
executed when an EC saves its changes. It's not *that* difficult,
however, and if there's a need for such a thing then I'll activate
the delegation stuff around EditingContext and then all that you'll
need to do is to code a given method in the EC's delegate.
(for the curious, the support for delegation is done in module
Modeling.delegation, whose documentation is here:
http://modeling.sf.net/API/Modeling-API/public/Modeling.delegation-module.html
Delegates are another TODO item for which there is no need [users'
request] for the time being)
Now you must understand that updating db rows directly with raw sql is
very risky and should be handled with great care: the ORM has no way
to know that some of the objects it already manages are being directly
changed in the db. If you don't care, you can face very strange
behaviour, such as objects' properties being overridden without notice,
or exceptions if you're using optimistic locking (not implemented
yet), etc.
Note that I don't say it's impossible --simply because I know how
this could be done. Just a quick hint: we will need a fetch to be
able to 'refresh' the already fetched objects (another TODO item).
> Here are a few use cases that might warrant direct database access:
>
> * batch updates
My _personal_ point of view here is that, if you do not want to pay for
the extra payload of manipulating real objects, such batches should
probably be made w/ the appropriate raw python adaptor. But I might have
misunderstood your point.
> * maintaining a hierarchy of objects stored as a nested set
Could you elaborate on that?
> * reporting queries, which can often be highly complex joins
If the query interface is not efficient enough for what you need,
fetchSQL() will probably be handy there. Now go and lobby for this TODO
item ;) BTW this was discussed some weeks ago, look in the archives for
the thread named "Summary objects and Proxy objects".
Last, as you probably already noticed, I usually do my best to
enhance the framework and implement particular features when there's a
strong user request; however most of the time I need use-cases because
it makes things a lot easier, and it also helps in delimiting the area
of the to-be-implemented feature (remember this is all done in my spare
time ;)
I hope this makes things clearer. If you need further information or
clarification, feel free to ask for more here.
Regards,
-- Sébastien.
From: Matt G. <ma...@po...> - 2003-07-15 12:14:39
Sebastien Bigaret wrote:
>Matt Goodall <ma...@po...> wrote:
>
>
>>I realise this is sometimes a contentious issue for O/R mapping but is
>>there any way to execute arbitrary SQL within a transaction managed by
>>an EditingContext?
>>
>>I've had a quick look through the API but cannot see any way of
>>getting hold of the underlying connection. It looks like I need to
>>obtain a DatabaseContext for my model but how do I do that?
>>
>>
>
>As far as I can read between the lines, you want to modify some data
>when an EC is saving changes, right?
>
Or to put it another way ... I want to run some arbitrary, manually
written SQL within the same transaction as any updates to objects
managed by Modeling.
> Could you be more explicit about what you'd like to do? There are
> numerous use-cases for such things, most of which have major drawbacks
> (and that's why it's considered contentious within ORMs, mainly). I'd
> prefer to discuss a particular use-case rather than discourse on
> generic cases that probably won't match your case ;)
>
>
Hmm, I don't actually have a use case at the moment ;-). In my somewhat
limited experience with ORMs, they can often handle the majority of
queries but there are times when it is easier or faster to directly
access the database. Obviously, it's important for the direct access to
be part of the ORM's transaction.
Here are a few use cases that might warrant direct database access:
* batch updates
* maintaining a hierarchy of objects stored as a nested set
* reporting queries, which can often be highly complex joins
I admit I'm fairly new to Modeling so I don't know its full potential.
However, I'm a little wary of using an ORM which blocks me when either
it can't handle a particularly complex use case or a hand-crafted query
would improve a bottleneck in the database access layer. I guess I'm
looking for an insurance policy ;-).
Cheers, Matt
--
Matt Goodall, Pollenation Internet Ltd
w: http://www.pollenationinternet.com
e: ma...@po...
From: Sebastien B. <sbi...@us...> - 2003-07-15 11:33:10
Hi Matt,

Matt Goodall <ma...@po...> wrote:
> I realise this is sometimes a contentious issue for O/R mapping but is
> there any way to execute arbitrary SQL within a transaction managed by
> an EditingContext?
>
> I've had a quick look through the API but cannot see any way of
> getting hold of the underlying connection. It looks like I need to
> obtain a DatabaseContext for my model but how do I do that?

As far as I can read between the lines, you want to modify some data when an
EC is saving changes, right?

Could you be more explicit about what you'd like to do? There are numerous
use-cases for such things, most of which have major drawbacks (and that's why
it's considered contentious within ORMs, mainly). I'd prefer to discuss a
particular use-case rather than discourse on generic cases that probably
won't match your case ;)

-- Sébastien.
From: Matt G. <ma...@po...> - 2003-07-15 11:06:25
Hi,

I realise this is sometimes a contentious issue for O/R mapping but is there
any way to execute arbitrary SQL within a transaction managed by an
EditingContext?

I've had a quick look through the API but cannot see any way of getting hold
of the underlying connection. It looks like I need to obtain a
DatabaseContext for my model but how do I do that?

Thanks, Matt

--
Matt Goodall, Pollenation Internet Ltd
w: http://www.pollenationinternet.com
e: ma...@po...
From: Sebastien B. <sbi...@us...> - 2003-07-14 21:33:07
Yannick Gingras <yan...@sa...> wrote:
> On July 14, 2003 03:40 pm, Sebastien Bigaret wrote:
> > With this you'll get only 2 fetches: one for the master (companies),
> > one for the slaves (related persons).
> >
> > Yannick, could you try this and report please?
>
> Looking at the patch it is unclear to me if I can mix it with the
> rawRow fetch one:
>
> dst_objs=ec.fetch(aRelationship.destinationEntityName(), q)
>
> Given the option I will rather take the rawRow fetch and manage the
> relation by hand but I know that my case is a bit special.
No, you need real objects to use batchFetchRelationship(), since its purpose
is to populate the to-many relationships with their related objects in a
single fetch.
However, you can either
* turn a raw row (dictionary) into a real object w/ something like:
  >>> import pprint
  >>> import AuthorBooks
  >>> from Modeling.EditingContext import EditingContext
  >>> ec=EditingContext()
  >>> rows=ec.fetch('Writer', 'firstName=="John"', rawRows=1)
  >>> an_author=rows[0]  # dictionary
  >>> pprint.pprint(an_author)
  {'FK_Writer_id': 2,
   'age': 82,
   'birthday': <DateTime object for '1921-06-29 04:56:34.00' at 84e33f0>,
   'firstName': 'Frederic',
   'id': 3,
   'lastName': 'Dard'}
  >>> author_pk_name='id'
  >>> from Modeling.GlobalID import KeyGlobalID
  >>> gid_author=KeyGlobalID('Writer',
  ...                        {author_pk_name: an_author[author_pk_name]})
  >>> real_author=ec.faultForGlobalID(gid_author, ec)
  >>> print real_author.getFirstName(), real_author.getLastName(), real_author
  Frederic Dard <Writer.Writer instance at 0x84d829c>
  Note: this is just a quick hint: when it is integrated in cvs, you'll then
  just ask:
    ec.faultForRawRow(an_author, 'Writer')
  instead of all these things.
* or simply fetch the related objects (either real objects or raw rows), say
  Writer, by using a qualifier like
    'books.FK_Writer_Id == %i' % an_author['id']
  (of course, this implies you use information from your model, such as the
  PK or FK names)
Okay, enough w/ this, time for the fireworks now ;)
-- Sébastien.
PS: BTW, the fetch API will probably be changed w/ 'rowRaws=' (instead of
'rawRow=', w/o an 's') --I meant it, yet another typo!
From: Yannick G. <yan...@sa...> - 2003-07-14 20:45:41
On July 14, 2003 03:40 pm, Sebastien Bigaret wrote:
> With this you'll get only 2 fetches: one for the master (companies),
> one for the slaves (related persons).
>
> Yannick, could you try this and report please?

Looking at the patch it is unclear to me if I can mix it with the rawRow
fetch one:

    dst_objs=ec.fetch(aRelationship.destinationEntityName(), q)

Given the option I will rather take the rawRow fetch and manage the relation
by hand, but I know that my case is a bit special.

--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468
From: Yannick G. <yan...@sa...> - 2003-07-14 20:28:59
On July 14, 2003 04:00 pm, Sebastien Bigaret wrote:
> Impressive isn't it?-)
>
> Here is the full version (fetching 5000 objects):
>
> - normal fetch (full objects): 5.6s.
>
> - raw rows (dictionaries): 0.40s

For some reason, I never experience such amazing performance boosts, but a
quick test here shows that my fetch time drops to 1/3 of what it was. :D

Seb, you're the man!

--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468
From: Sebastien B. <sbi...@us...> - 2003-07-14 20:01:10
Yannick Gingras <yan...@sa...> wrote:
> On July 14, 2003 03:38 pm, Sebastien Bigaret wrote:
> > - normal fetch (full objects): 5.6s.
> >
> > - raw rows (dictionaries):
>
> Wow ! the time is null !
Impressive isn't it?-)
Here is the full version (fetching 5000 objects):
- normal fetch (full objects): 5.6s.
- raw rows (dictionaries): 0.40s
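In code, the two styles being compared look like this (a sketch; the 'Writer'
entity comes from other messages in this thread, and note that the keyword is
spelled both rawRow and rawRows across the thread):

    objects = ec.fetch('Writer')              # full objects: ~5.6s for 5000 rows
    rows    = ec.fetch('Writer', rawRows=1)   # plain dictionaries: ~0.40s
    name    = rows[0]['lastName']             # raw rows are just dicts of column values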
-- Sébastien.
From: Yannick G. <yan...@sa...> - 2003-07-14 19:44:46
On July 14, 2003 03:38 pm, Sebastien Bigaret wrote:
> - normal fetch (full objects): 5.6s.
>
> - raw rows (dictionaries):

Wow! The time is null! ;)

Thanks Sébastien, this one is REALLY appreciated here. :)

--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468