modeling-users Mailing List for Object-Relational Bridge for python (Page 31)
Status: Abandoned
Brought to you by: sbigaret
From: Sebastien B. <sbi...@us...> - 2003-07-14 19:40:49
Hi,

Given the recent discussion we had with Yannick on to-many faults being triggered even when all the related objects were previously fetched, here is a first implementation of DatabaseContext.batchFetchRelationship(). You'll find the patch at:
https://sourceforge.net/tracker/index.php?func=detail&aid=771009&group_id=58935&atid=489337

While it's quite naive for the time being, it is completely usable. Current limitations: it can only act on to-many relationships, and these to-many relationships should have an inverse to-one relationship.

Usage (this is NOT the definitive API that will be exposed to users ;)
-----
Say you have Company.toPersons <---->> Person.company

1. fetch the companies you want,
2. use this:

>>> from Modeling.ModelSet import defaultModelSet
>>> dbContext=databaseContext('Company', ec)
>>> rel=defaultModelSet().entityNamed('Company').relationshipNamed('persons')
>>> dbContext.batchFetchRelationship(rel, companies, ec)

given that databaseContext() is:

def databaseContext(entityName, ec):
  from Modeling.FetchSpecification import FetchSpecification
  fs=FetchSpecification(entityName)
  return ec.rootObjectStore().objectStoreForFetchSpecification(fs)

With this you'll get only 2 fetches: one for the master (companies), one for the slaves (related persons).

Yannick, could you try this and report please?

-- Sébastien.
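As an aside, here is a minimal sketch of the N+1 pattern this batch fetch is meant to replace, reusing the databaseContext() helper defined above; the Company/Person entities come from the example, while the getToPersons() accessor name is only an assumption:

  from Modeling.EditingContext import EditingContext
  from Modeling.ModelSet import defaultModelSet

  ec = EditingContext()
  companies = ec.fetch('Company')        # 1 SELECT for the masters

  # Naive pattern: each access to the to-many fault costs one more SELECT,
  # i.e. N+1 round-trips in total.
  for company in companies:
      persons = company.getToPersons()   # assumed accessor name

  # Batched alternative (the patch above): resolve all the to-many faults
  # with one extra SELECT, i.e. two fetches in total.
  rel = defaultModelSet().entityNamed('Company').relationshipNamed('persons')
  databaseContext('Company', ec).batchFetchRelationship(rel, companies, ec)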
From: Sebastien B. <sbi...@us...> - 2003-07-14 19:38:34
Hi all,

I've submitted patch #771168:
https://sourceforge.net/tracker/index.php?func=detail&aid=771168&group_id=58935&atid=489337

which enables the ability to fetch raw rows (i.e. you get dictionaries) instead of fully-initialized objects. I remember we already discussed this feature some time ago with some of you.

Usage
-----
Either use FetchSpecification.setFetchesRawRows(1) or directly:
EditingContext.fetch(..., rawRows=1, ...)

Quick benchs
------------
On 5000 db rows, I get:

- normal fetch (full objects): 5.6s.
- raw rows (dictionaries):

Tested on postgresql and mysql.

-- Sébastien.
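A small usage sketch of the two fetch styles, against the AuthorBooks test package used elsewhere in this thread; the timing harness is illustrative and the numbers will of course differ from the ones above:

  import time
  from Modeling.EditingContext import EditingContext

  ec = EditingContext()

  t0 = time.time()
  books = ec.fetch('Book')               # fully-initialized objects
  print 'full objects: %.2fs' % (time.time() - t0)

  t0 = time.time()
  rows = ec.fetch('Book', rawRows=1)     # plain dictionaries, no object initialization
  print 'raw rows:     %.2fs' % (time.time() - t0)
  print rows[0]                          # e.g. a dict of column name -> value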
From: Yannick G. <yan...@sa...> - 2003-07-14 13:23:02
On July 11, 2003 04:25 pm, you wrote:
> 1. Do these times refer to the 30s spent on the initial algorithm? Does
> this mean half of the time is already saved by prefetching? (I'm not
> suggesting this is sufficient, I'm just curious there)

The "39.780 CPU seconds" refers to the time given by the python profiler. "User time (seconds): 15.18" is taken from /usr/bin/time with profiling disabled (hence the lower CPU time amount). In both cases no pre-fetch was done.

> 2. I'm surprised the benefit of MDL_PERMANENT_DB_CONNECTION is so
> low. That's a very minor point however.

Running with MDL_ENABLE_DATABASE_LOGGING shows that the DB connection IS re-used, but the lack of improvement is an utter mystery for me too.

> 3. How can it be that qualifierWithQualifierFormat() is called *so many
> times*?? I do not understand that either.

I called it exactly 3 times, the rest is internal to the framework:

        3    0.000    0.000   35.940   11.980 /home/ygingras/modules/ModelManager.py:45(fetch)

> I guess it's time for you to forget about qualifiers as strings and to
> learn how to build your own Qualifier instances by hand. It's not that
> complicated, and you will then avoid all the time lost in parsing the
> qualifier. Of course, only change the qualifiers that are highly
> stressed by the fetch of to-many.
>
> Example:
>
>   ec.fetch('Writer', 'books.id in %s'%pks)
>
> can be rewritten using:
>
>   from Modeling import Qualifier
>   q=Qualifier.KeyValueQualifier('books.id',
>                                 Qualifier.QualifierOperatorIn,
>                                 pks)
>   ec.fetch('Writer', q)

I thought you said key-value coding was evil ; )

> > Is there any hope or should I forget about the nice abstraction provided
> > by Modeling and craft my own SQL ?
>
> *I* really think that there is a solution --at least one that works on
> paper, and that I need to test; believe me, I wouldn't say this if I
> didn't have the strong feeling that it can be improved. But as I already
> said, I won't have time until tuesday (maybe on monday, but I can't
> promise for now) to actually test it, so we'll only be sure how good
> it is then; you'll understand that I obviously can't be absolutely
> positive before. It will involve two massive queries and no additional
> fetch afterwards, so we can expect a big improvement.
>
> I hope you can live with these results until then, and we'll see how
> good this solution is then.
>
> Thanks a lot for the figures and the profile log. I'd be interested
> in the same after you change your qualifiers as suggested above, so
> we can compare.

Thanks for your quick answers, I'll see what I can do and keep you in touch.

For the sake of completeness, I use:

- Python 2.2.2
- Modeling 0.9-pre-9 with the "IN" patch
- MySQL 3.23.56
- mysql-python 0.9.2

The benchmark was performed on a PIII 1 GHz with 1 GB of RAM.

-- Yannick Gingras
   Byte Gardener, Savoir-faire Linux inc.
   (514) 276-5468
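For reference, a profile like the one quoted above can be collected with the standard library alone; this sketch is generic Python, and the fetch it measures is only illustrative:

  import profile, pstats
  from Modeling.EditingContext import EditingContext

  def run_fetch():
      ec = EditingContext()
      return ec.fetch('Writer')          # whatever fetch you want to measure

  profile.run('run_fetch()', '/tmp/fetch.prof')
  stats = pstats.Stats('/tmp/fetch.prof')
  stats.sort_stats('time').print_stats(30)   # "Ordered by: internal time", as in the log above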
From: Sebastien B. <sbi...@us...> - 2003-07-14 12:24:53
Yannick Gingras <yan...@sa...> wrote:
[...]
> Then I have detailed profiler logs:
>
> Thu Jul 10 17:21:24 2003    /tmp/@25817.0
>
>          1557967 function calls (1008710 primitive calls) in 39.780 CPU seconds
>
>    Ordered by: internal time
>
>    ncalls     tottime  percall  cumtime  percall filename:lineno(function)
> 546199/5876   15.900    0.000   15.900    0.003 /usr/lib/python2.2/site-packages/Modeling/QualifierParser.py:112(__str__)
[...]

Sorry, my fault. This was due to trace() statements causing additional overhead even when tracing was disabled (because of the formatting of strings, '%s' % s). The patch can be downloaded at:
https://sourceforge.net/tracker/index.php?func=detail&aid=770906&group_id=58935&atid=489337

On my machine I can observe a gain factor of up to 7... (It is already integrated into the cvs main trunk, but since anonymous cvs and webcvs have a one-day delay, I thought you'd like to get it asap.)

-- Sébastien.
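The pattern behind that overhead is easy to see in isolation; a tiny illustration with a stand-in trace() helper (this is not the framework's actual code):

  TRACE_ENABLED = 0

  def trace(msg):
      # stand-in: discards the message when tracing is off
      if TRACE_ENABLED:
          print msg

  row = {'id': 4, 'title': 'Test'}

  # Costly: the '%s' formatting runs on every call, even though trace() then discards it.
  trace('fetched row: %s' % row)

  # Cheap: guard the formatting so it only happens when tracing is actually enabled.
  if TRACE_ENABLED:
      trace('fetched row: %s' % row)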
From: Sebastien B. <sbi...@us...> - 2003-07-11 20:25:42
Hi Yannick,

Replying quite quickly here again.

1. Do these times refer to the 30s spent on the initial algorithm? Does this mean half of the time is already saved by prefetching? (I'm not suggesting this is sufficient, I'm just curious there)

2. I'm surprised the benefit of MDL_PERMANENT_DB_CONNECTION is so low. That's a very minor point however.

3. How can it be that qualifierWithQualifierFormat() is called *so many times*?? I do not understand that either.

However:

> A lot of time is spent parsing the qualifier and building the SQL
> query. Could spark be an unexpected bottleneck ? The "IN" query
> does not seem to scale really well for large value lists.

_Too much_ time spent in parsing a qualifier, indeed. And yes, for highly stressed applications heavily dependent on qualifier strings, spark is a known bottleneck --it's all in python, hence its slowness. A chapter on performance tuning is still to be written, and that's definitely meat for it (that's one of the reasons why I'm asking for the figures).

I guess it's time for you to forget about qualifiers as strings and to learn how to build your own Qualifier instances by hand. It's not that complicated, and you will then avoid all the time lost in parsing the qualifier. Of course, only change the qualifiers that are highly stressed by the fetch of to-many.

Example:

  ec.fetch('Writer', 'books.id in %s'%pks)

can be rewritten using:

  from Modeling import Qualifier
  q=Qualifier.KeyValueQualifier('books.id',
                                Qualifier.QualifierOperatorIn,
                                pks)
  ec.fetch('Writer', q)

Please post if you need help in building your own qualifiers (this also needs to be documented).

> Is there any hope or should I forget about the nice abstraction provided by
> Modeling and craft my own SQL ?

*I* really think that there is a solution --at least one that works on paper, and that I need to test; believe me, I wouldn't say this if I didn't have the strong feeling that it can be improved. But as I already said, I won't have time until tuesday (maybe on monday, but I can't promise for now) to actually test it, so we'll only be sure how good it is then; you'll understand that I obviously can't be absolutely positive before. It will involve two massive queries and no additional fetch afterwards, so we can expect a big improvement.

I hope you can live with these results until then, and we'll see how good this solution is then.

Thanks a lot for the figures and the profile log. I'd be interested in the same after you change your qualifiers as suggested above, so we can compare.

Thanks for your patience, regards,

-- Sébastien.
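A rough way to compare the two forms from the example above; the EditingContext and the list of primary keys are placeholders, and the timing harness is illustrative:

  import time
  from Modeling import Qualifier
  from Modeling.EditingContext import EditingContext

  ec = EditingContext()
  pks = range(1, 100)                    # placeholder primary keys

  def fetch_with_string():
      # the qualifier string is re-parsed (by spark) on every call
      return ec.fetch('Writer', 'books.id in %s'%pks)

  def fetch_with_qualifier():
      # hand-built qualifier: no string parsing at all
      q = Qualifier.KeyValueQualifier('books.id',
                                      Qualifier.QualifierOperatorIn,
                                      pks)
      return ec.fetch('Writer', q)

  for fetch in (fetch_with_string, fetch_with_qualifier):
      t0 = time.time()
      fetch()
      print '%s: %.2fs' % (fetch.__name__, time.time() - t0)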
From: Yannick G. <yan...@sa...> - 2003-07-11 19:31:43
On July 11, 2003 01:32 am, you wrote:
> A quick note about this: I wrote it a bit too quickly yesterday. Even
> w/ to-one rel., you'll still have to pay for the extra round-trip to the
> db when accessing the inverse to-many rel:
>
> >>> ec=EditingContext()
> >>> objs=ec.fetch('Book')                     # fetch all books
> >>> gids=[o.globalID() for o in objs]
> >>> pks=[gid.keyValues()['id'] for gid in gids]
> >>> objs[0].getAuthor().isFault()
> 1
> >>> ec.fetch('Writer', 'books.id in %s'%pks)  # all fetches at once
> >>> objs[0].getAuthor().isFault()             # no round-trip to the db
> 0
> >>> objs[0].getAuthor().getBooks()            # additional round-trip to the db
>
> ...for the very same reason I explained in the previous post. I'm
> thinking of a way to avoid this, but will probably not have the time
> until tuesday to test and code it.
>
> In the meantime, as I said, I'm quite interested in hearing about the
> performance you observe w/ your db when prefetching the rels. and
> setting MDL_PERMANENT_DB_CONNECTION to 1.

Iterative fetch of the i18ns array wo/ MDL_PERMANENT_DB_CONNECTION:

  User time (seconds): 15.18
  System time (seconds): 0.70
  Percent of CPU this job got: 79%
  Elapsed (wall clock) time (h:mm:ss or m:ss): 0:19.89

Iterative fetch of the i18ns array w/ MDL_PERMANENT_DB_CONNECTION:

  User time (seconds): 14.29
  System time (seconds): 0.33
  Percent of CPU this job got: 83%
  Elapsed (wall clock) time (h:mm:ss or m:ss): 0:17.49

2 fetches, one for the masters, one with "IN" for the i18ns:

  User time (seconds): 13.99
  System time (seconds): 0.17
  Percent of CPU this job got: 87%
  Elapsed (wall clock) time (h:mm:ss or m:ss): 0:16.17

: \

The very same query typed at the prompt with a SQL JOIN spit an amazing amount of data in my face before printing "1440 rows in set (0.02 sec)"... I understand that I have to pay a price for the abstraction provided by the framework, but... well, it's beyond usability here... : (

Then I have detailed profiler logs:

Thu Jul 10 17:21:24 2003    /tmp/@25817.0

         1557967 function calls (1008710 primitive calls) in 39.780 CPU seconds

   Ordered by: internal time

   ncalls     tottime  percall  cumtime  percall filename:lineno(function)
546199/5876    15.900    0.000   15.900    0.003 /usr/lib/python2.2/site-packages/Modeling/QualifierParser.py:112(__str__)
    39164       1.310    0.000    1.870    0.000 /usr/lib/python2.2/threading.py:99(release)
    39164       1.160    0.000    1.870    0.000 /usr/lib/python2.2/threading.py:81(acquire)
     2173       1.040    0.000    6.790    0.003 /usr/lib/python2.2/site-packages/Modeling/DatabaseContext.py:1228(initializeObject)
     2176       0.770    0.000   17.470    0.008 /usr/lib/python2.2/site-packages/Modeling/DatabaseChannel.py:110(fetchObject)
    78328       0.680    0.000    0.680    0.000 /usr/lib/python2.2/threading.py:44(_note)
     4362       0.660    0.000    1.160    0.000 /usr/lib/python2.2/site-packages/Modeling/Model.py:119(entityNamed)
    78328       0.590    0.000    0.590    0.000 /usr/lib/python2.2/threading.py:581(currentThread)
    72588       0.530    0.000    0.530    0.000 /usr/lib/python2.2/site-packages/Modeling/Entity.py:565(name)
     2924       0.510    0.000    0.650    0.000 /home/ygingras/modules/spark.py:209(buildState)
     2173       0.510    0.000    1.010    0.000 /usr/lib/python2.2/site-packages/NotificationFramework/NotificationCenter.py:249(addObserver)
    12271       0.510    0.000    0.770    0.000 /usr/lib/python2.2/site-packages/Modeling/KeyValueCoding.py:149(takeStoredValueForKey)
    10868       0.470    0.000    1.560    0.000 /usr/lib/python2.2/site-packages/Modeling/ClassDescription.py:76(classDescriptionForName)
    24542       0.450    0.000    0.630    0.000 /usr/lib/python2.2/site-packages/Modeling/utils.py:50(capitalizeFirstLetter)
    12271       0.440    0.000    0.810    0.000 /usr/lib/python2.2/site-packages/Modeling/KeyValueCoding.py:130(storedValueForKey)
    15216       0.400    0.000    1.110    0.000 /usr/lib/python2.2/site-packages/Modeling/Database.py:381(lock)
     2173       0.350    0.000    2.480    0.001 /usr/lib/python2.2/site-packages/Modeling/CustomObject.py:233(snapshot)
     2176       0.350    0.000    0.800    0.000 /usr/lib/python2.2/site-packages/Modeling/DatabaseAdaptors/AbstractDBAPI2AdaptorLayer/AbstractDBAPI2AdaptorChannel.py:175(fetchRow)
     4346       0.340    0.000    0.560    0.000 /usr/lib/python2.2/site-packages/Modeling/Entity.py:348(classProperties_attributes)
     2173       0.330    0.000    0.390    0.000 /usr/lib/python2.2/site-packages/Modeling/EntityClassDescription.py:452(classForEntity)
    17384       0.330    0.000    0.710    0.000 /usr/lib/python2.2/site-packages/Modeling/CustomObject.py:121(classDescription)
     2173       0.320    0.000    1.130    0.001 /usr/lib/python2.2/site-packages/Modeling/EditingContext.py:182(addObjectForGlobalID)
     2173       0.310    0.000    0.410    0.000 /usr/lib/python2.2/site-packages/Modeling/EntityClassDescription.py:122(attributesKeys)
    39114       0.280    0.000    0.280    0.000 /usr/lib/python2.2/site-packages/Modeling/GlobalID.py:185(__hash__)
   2935/3       0.280    0.000   15.930    5.310 /home/ygingras/modules/spark.py:330(buildTree_r)
    51287       0.270    0.000    0.270    0.000 /usr/lib/python2.2/site-packages/Modeling/Attribute.py:261(name)
    36828       0.250    0.000    0.250    0.000 /usr/lib/python2.2/site-packages/Modeling/Attribute.py:234(isClassProperty)
    12271       0.250    0.000    0.280    0.000 /usr/lib/python2.2/site-packages/Modeling/Entity.py:346(<lambda>)
     4346       0.240    0.000    0.240    0.000 /usr/lib/python2.2/site-packages/Modeling/GlobalID.py:176(__str__)
        3       0.240    0.080    0.410    0.137 /home/ygingras/modules/spark.py:66(tokenize)
    12271       0.220    0.000    0.220    0.000 /usr/lib/python2.2/site-packages/Modeling/DatabaseObject.py:174(attributeNameForKey)
     2173       0.210    0.000    9.060    0.004 /usr/lib/python2.2/site-packages/Modeling/ObjectStoreCoordinator.py:371(initializeObject)
[...]

A lot of time is spent parsing the qualifier and building the SQL query. Could spark be an unexpected bottleneck? The "IN" query does not seem to scale really well for large value lists.

Is there any hope, or should I forget about the nice abstraction provided by Modeling and craft my own SQL?

-- Yannick Gingras
   Byte Gardener, Savoir-faire Linux inc.
   (514) 276-5468
From: Sebastien B. <sbi...@us...> - 2003-07-11 11:11:06
A quick note about this: I wrote it a bit too quickly yesterday. Even w/ to-one rel., you'll still have to pay for the extra round-trip to the db when accessing the inverse to-many rel:

>>> ec=EditingContext()
>>> objs=ec.fetch('Book')                     # fetch all books
>>> gids=[o.globalID() for o in objs]
>>> pks=[gid.keyValues()['id'] for gid in gids]
>>> objs[0].getAuthor().isFault()
1
>>> ec.fetch('Writer', 'books.id in %s'%pks)  # all fetches at once
>>> objs[0].getAuthor().isFault()             # no round-trip to the db
0
>>> objs[0].getAuthor().getBooks()            # additional round-trip to the db

...for the very same reason I explained in the previous post. I'm thinking of a way to avoid this, but will probably not have the time until tuesday to test and code it.

In the meantime, as I said, I'm quite interested in hearing about the performance you observe w/ your db when prefetching the rels. and setting MDL_PERMANENT_DB_CONNECTION to 1.

-- Sébastien.

Sebastien Bigaret <sbi...@us...> writes:

> Hi,
>
> Here are some hints; sorry, but I can't spend a lot of time on this
> for now. BTW I will be quite busy until tuesday, probably mostly
> offline.
>
> > I have a particular record set that holds 700 or so rows. I use
> > dynamic i18n from another table that we will call i18nTable for
> > simplicity. There are only a few fields in i18nTable: title and
> > description.
> >
> > The relation looks like this:
> [snipped: toMany rel. 'i18ns']
> > So here I am trying to fetch my whole record set. The initial fetch
> > is really fast, but when I try to getI18ns() on each fetched object it
> > results in a SQL SELECT. This means I do 701 SELECTs where I
> > intended to do only one... My whole fetch takes 30 seconds to complete,
> > but the initial fetch (the master records with lots of columns) takes
> > barely a second. It would be really nice if I could group the fetch so
> > that it resulted in a single SQL select (ok, maybe 2: one for master
> > records, one for all the i18ns).
>
> I understand this, but this is not that simple. See below.
>
> > I understand why you really want lazy initialization. It would be
> > even slower to fetch the whole database when you fetch an object with
> > many relations. But having it optional, something like a
> > recursiveFetch(), would be really nice.
>
> I will think about it; but as you already noted, this can result in
> unwanted cascaded fetches ultimately loading the whole db.
>
> > So my question is: Is it possible to do so?
>
> Before addressing your particular pb., let's see if this can be done with
> a to-one relationship. In fact, a simple combination of globalIDs and
> the operator 'in' does the trick:
>
> [in Modeling/tests/testPackages]
> >>> from Modeling.EditingContext import EditingContext
> >>> import AuthorBooks
> >>> ec=EditingContext()
> >>> objs=ec.fetch('Book')                     # fetch all books
> >>> gids=[o.globalID() for o in objs]
> >>> pks=[gid.keyValues()['id'] for gid in gids]
> >>> objs[0].getAuthor().isFault()
> 1
> >>> ec.fetch('Writer', 'books.id in %s'%pks)  # all fetches at once
> >>> objs[0].getAuthor().isFault()             # no round-trip to the db
> 0
>
> Now consider the to-many relationship:
>
> >>> ec=EditingContext()
> >>> objs=ec.fetch('Writer')
> >>> gids=[o.globalID() for o in objs]
> >>> pks=[gid.keyValues()['id'] for gid in gids]
> >>> objs[0].getBooks().isFault()
> 1
> >>> ec.fetch('Book', 'author.id in %s'%pks)   # all fetches at once
> >>> objs[0].getBooks().isFault()              # still a fault!
> 1
> >>> len(objs[0].getBooks())                   # triggers a select
>
> What is the pb here? The framework has no way to know that all the necessary
> objects for the given relation have already been loaded, so it needs to
> fetch, at least to get the ids. So in this case there's no way to avoid
> the additional round-trip to the db. However, these additional fetches
> avoid the initialization phase for these objects (it's already done), so
> you can expect these fetches to be significantly faster.
>
> As a conclusion, I'll say that:
>
> - either fetch all the i18ns objects, then pre-fetch the inverse
>   to-one pointing to your record objects,
>
> - or stay as-is, and live w/ the additional fetches (where the
>   related i18ns are initialized by a single fetch, but where you'll
>   still get a fetch for each faulted array).
>
> If you could try both, I'd be very interested in knowing how long each
> one takes with your data. Thanks in advance!
>
> Oh, and BTW: in case you've not already done it, you'd probably want
> to set MDL_PERMANENT_DB_CONNECTION to 1, so that you do not pay the
> additional overhead of closing and reopening the db connection for
> each fetch.
>
> > BTW The trick with FixedPoint works really well, thanks for the hints!
>
> Glad to hear this, this will then be documented for the next release.
>
> -- Sébastien.
From: Sebastien B. <sbi...@us...> - 2003-07-10 22:01:44
Hi,

Here are some hints; sorry, but I can't spend a lot of time on this for now. BTW I will be quite busy until tuesday, probably mostly offline.

> I have a particular record set that holds 700 or so rows. I use
> dynamic i18n from another table that we will call i18nTable for
> simplicity. There are only a few fields in i18nTable: title and
> description.
>
> The relation looks like this:
[snipped: toMany rel. 'i18ns']
> So here I am trying to fetch my whole record set. The initial fetch
> is really fast, but when I try to getI18ns() on each fetched object it
> results in a SQL SELECT. This means I do 701 SELECTs where I
> intended to do only one... My whole fetch takes 30 seconds to complete,
> but the initial fetch (the master records with lots of columns) takes
> barely a second. It would be really nice if I could group the fetch so
> that it resulted in a single SQL select (ok, maybe 2: one for master
> records, one for all the i18ns).

I understand this, but this is not that simple. See below.

> I understand why you really want lazy initialization. It would be
> even slower to fetch the whole database when you fetch an object with
> many relations. But having it optional, something like a
> recursiveFetch(), would be really nice.

I will think about it; but as you already noted, this can result in unwanted cascaded fetches ultimately loading the whole db.

> So my question is: Is it possible to do so?

Before addressing your particular pb., let's see if this can be done with a to-one relationship. In fact, a simple combination of globalIDs and the operator 'in' does the trick:

[in Modeling/tests/testPackages]
>>> from Modeling.EditingContext import EditingContext
>>> import AuthorBooks
>>> ec=EditingContext()
>>> objs=ec.fetch('Book')                     # fetch all books
>>> gids=[o.globalID() for o in objs]
>>> pks=[gid.keyValues()['id'] for gid in gids]
>>> objs[0].getAuthor().isFault()
1
>>> ec.fetch('Writer', 'books.id in %s'%pks)  # all fetches at once
>>> objs[0].getAuthor().isFault()             # no round-trip to the db
0

Now consider the to-many relationship:

>>> ec=EditingContext()
>>> objs=ec.fetch('Writer')
>>> gids=[o.globalID() for o in objs]
>>> pks=[gid.keyValues()['id'] for gid in gids]
>>> objs[0].getBooks().isFault()
1
>>> ec.fetch('Book', 'author.id in %s'%pks)   # all fetches at once
>>> objs[0].getBooks().isFault()              # still a fault!
1
>>> len(objs[0].getBooks())                   # triggers a select

What is the pb here? The framework has no way to know that all the necessary objects for the given relation have already been loaded, so it needs to fetch, at least to get the ids. So in this case there's no way to avoid the additional round-trip to the db. However, these additional fetches avoid the initialization phase for these objects (it's already done), so you can expect these fetches to be significantly faster.

As a conclusion, I'll say that:

- either fetch all the i18ns objects, then pre-fetch the inverse
  to-one pointing to your record objects,

- or stay as-is, and live w/ the additional fetches (where the
  related i18ns are initialized by a single fetch, but where you'll
  still get a fetch for each faulted array).

If you could try both, I'd be very interested in knowing how long each one takes with your data. Thanks in advance!

Oh, and BTW: in case you've not already done it, you'd probably want to set MDL_PERMANENT_DB_CONNECTION to 1, so that you do not pay the additional overhead of closing and reopening the db connection for each fetch.

> BTW The trick with FixedPoint works really well, thanks for the hints!

Glad to hear this, this will then be documented for the next release.

-- Sébastien.
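A minimal sketch of that last suggestion; it assumes the framework picks MDL_PERMANENT_DB_CONNECTION up from the environment when the first database connection is opened, so the variable is set before any fetch:

  import os
  os.environ['MDL_PERMANENT_DB_CONNECTION'] = '1'   # keep the db connection open between fetches

  from Modeling.EditingContext import EditingContext
  ec = EditingContext()
  writers = ec.fetch('Writer')   # the first fetch opens the connection...
  books = ec.fetch('Book')       # ...later fetches reuse it instead of reconnecting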
From: Yannick G. <yan...@sa...> - 2003-07-10 19:44:30
It should be noted that "%s" % list(myVals) will put an "L" behind every value, making the query invalid...

This:

  ", ".join(map(lambda val: str(val), valList))

will do the trick!

-- Yannick Gingras
   Byte Gardener, Savoir-faire Linux inc.
   (514) 276-5468
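A quick illustration of the difference, with Python 2-era longs (the values are made up):

  pks = [4L, 8L, 15L]                      # primary keys often come back as longs
  print "%s" % list(pks)                   # -> [4L, 8L, 15L]   invalid inside an SQL "IN (...)"
  print "(%s)" % ", ".join(map(str, pks))  # -> (4, 8, 15)      str() drops the trailing "L"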
From: Yannick G. <yan...@sa...> - 2003-07-10 18:22:50
Hi,

I have a particular record set that holds 700 or so rows. I use dynamic i18n from another table that we will call i18nTable for simplicity. There are only a few fields in i18nTable: title and description.

The relation looks like this:

  <relation displayLabel=''
            joinSemantic='0'
            name='i18ns'
            destinationEntity='i18nTable'
            multiplicityLowerBound='0'
            multiplicityUpperBound='-1'
            isClassProperty='1'
            deleteRule='0'>
    <join sourceAttribute='id' destinationAttribute='masterId' />
  </relation>

So here I am trying to fetch my whole record set. The initial fetch is really fast, but when I try to getI18ns() on each fetched object it results in a SQL SELECT. This means I do 701 SELECTs where I intended to do only one... My whole fetch takes 30 seconds to complete, but the initial fetch (the master records with lots of columns) takes barely a second. It would be really nice if I could group the fetch so that it resulted in a single SQL select (ok, maybe 2: one for master records, one for all the i18ns).

I understand why you really want lazy initialization. It would be even slower to fetch the whole database when you fetch an object with many relations. But having it optional, something like a recursiveFetch(), would be really nice.

So my question is: Is it possible to do so?

BTW The trick with FixedPoint works really well, thanks for the hints!

-- Yannick Gingras
   Byte Gardener, Savoir-faire Linux inc.
   (514) 276-5468
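For illustration, the fetch pattern described above boils down to something like this; the 'MasterRecord' entity name is a placeholder, and getI18ns() is the accessor mentioned in the mail:

  from Modeling.EditingContext import EditingContext

  ec = EditingContext()
  records = ec.fetch('MasterRecord')   # 1 SELECT, fast, ~700 rows
  for record in records:
      i18ns = record.getI18ns()        # fires the 'i18ns' to-many fault: 1 extra SELECT per row
  # total: 701 SELECTs where one or two would do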
From: Sebastien B. <sbi...@us...> - 2003-07-09 15:26:33
By the way, if any of you needs to make FixedPoint work w/ python 2.1, you'll probably be interested in the patch I submitted today:
https://sourceforge.net/tracker/index.php?func=detail&aid=768353&group_id=63070&atid=502758

Otherwise it will just not work (at least not as documented).

-- Sébastien.
From: Sebastien B. <sbi...@us...> - 2003-07-09 15:00:46
Yannick Gingras <yan...@sa...> wrote:
> Argh !
>
> I took the email home and I forgot to fwd it at work. SF.net mailing
> list archival is a few days late. Is it possible to submit the patch
> to the patch tracker (
> https://sourceforge.net/tracker/?group_id=58935&atid=489337 ) ?

The corrected patch (handling lists containing only one item) is there:
cvs.diff.operator.in.not_in.patch.2.txt

(If you already applied the previous one, simply revert it and apply this one instead.)

BTW this has nothing to do w/ FixedPoint and the handling of Custom Data Types --which need no patch except for your own classes ;)

==> if it's this mail you'd like to re-get, email me privately and I'll send it back to you.

-- Sébastien.
From: Yannick G. <yan...@sa...> - 2003-07-09 14:39:09
Argh !

I took the email home and I forgot to fwd it at work. SF.net mailing list archival is a few days late. Is it possible to submit the patch to the patch tracker (
https://sourceforge.net/tracker/?group_id=58935&atid=489337 ) ?

Thanks in advance ! : )

-- Yannick Gingras
   Byte Gardener, Savoir-faire Linux inc.
   (514) 276-5468
From: Sebastien B. <sbi...@us...> - 2003-07-09 11:55:57
Hi all,

I'm glad to announce that the cvs branch used to develop the PyModel (model expressed in plain python, rather than in an explicit xml file) is now explicitly closed; it was integrated into the main trunk yesterday. It will be included in the next release.

PyModels are an *alternative* to xml models; both directly map to Modeling.Model instances when loaded at runtime. Both formats are and will be supported in the future.

Many thanks to Mario, who kept hassling me about a nicer and handier model representation than xml ;) and to all those who discussed the new feature.

Since it will probably take some time before the documentation initiated by Mario is finalized, here is an overview of what has been done, and what still needs to be done.

- The three scripts mdl_generate_python_code.py, mdl_generate_DB_schema.py and mdl_validate_model.py have been adapted to take either an xmlmodel or a pymodel as their argument.

- Example PyModels are integrated into testPackages, see
  Modeling/tests/testPackages/AuthorBooks/pymodel_AuthorBooks.py and
  Modeling/tests/testPackages/StoreEmployees/pymodel_StoreEmployees.py

- Reminder: you can find more information by referring to the threads where we discussed the PyModels:
  https://sourceforge.net/mailarchive/forum.php?thread_id=1702463&forum_id=10674
  https://sourceforge.net/mailarchive/forum.php?thread_id=1703927&forum_id=10674
  https://sourceforge.net/mailarchive/forum.php?thread_id=1755270&forum_id=10674

- There is no script converting xml models to pymodels yet. Hence, the ZModeler still only dumps xml models. This will be integrated in the next release, probably as mdl_convert_model.py.

Backward incompatibility: none intended, and none observed for the time being. Existing projects will continue to work as expected.

Loading a model
---------------
If you try the cvs version, you'll probably notice that the method ModelSet.loadModel() (used in generated __init__.py, probably in your package as well) has been deprecated. It has been moved to Model.loadModel(). The new scheme for loading a model (supporting PyModels as well as xml models), as included in __init__.py for generated packages, is shorter than the previous one:

------------------------------------------------------------------------
# Load the model
MODEL_NAME='your_model_name'
from Modeling import ModelSet, Model
if ModelSet.defaultModelSet().modelNamed(MODEL_NAME) is None:
  import os
  mydir = os.path.abspath(os.path.dirname(__file__))
  model=Model.searchModel(MODEL_NAME, mydir, verbose=0)
  if not model:
    import warnings
    warnings.warn("Couldn't load model %s" % MODEL_NAME)
  else:
    ModelSet.defaultModelSet().addModel(model)
------------------------------------------------------------------------

(see the docstring for Model.searchModel() for details)

Known bugs:
-----------
Only one for the moment: if you need to load more than one PyModel at runtime, you'll probably encounter strange behaviour; this is due to defaults leaking from one model to the other one. A quick fix is to add 'import PyModel; reload(PyModel)' before loading the pymodel.

It'd be great if some of you could spend some time to test it and report.

For those who do not want to use pserver-based cvs (which too often ends up with EOF received from sf), a cvs snapshot can be downloaded at:
http://modeling.sourceforge.net/download/ModelingCore-0.9-pre-9.PyModel.tar.gz

And feel free to ask for more information!

Regards,

-- Sébastien.
From: Sebastien B. <sbi...@us...> - 2003-07-09 10:05:48
Yannick Gingras <ygi...@yg...> writes:
> On Tuesday 08 July 2003 20:18, Sebastien Bigaret wrote:
> > I wrote:
> > > [...] BTW, have you looked at the python
> > > cookbook or searched for any monetary handling package? I'm quite
> > > confident such problems must have been already addressed somewhere,
> > > it's probably worth the search.
> >
> > Instant Google search:
> > http://fixedpoint.sourceforge.net/html/lib/module-FixedPoint.html
>
> Nice !
>
> Will Modeling return some fixed points or should I do the trick with
> varchars ?

Okay, I've thought a little about it and made some experiments.

This is a (yet undocumented) way of handling custom data types, i.e. data types that are not automatically handled by the core. I've made the following changes to the test package AuthorBooks to adapt Book's attribute price to FixedPoint:

- changed the model so that price is a string/VARCHAR (was: a float/NUMERIC(10,2)),

- added _setPrice() and _getPrice() to AuthorBooks.Book (see the unified diff below).

Now for the test (remember to regenerate the DB schema!):

>>> from fixedpoint import FixedPoint
>>> from AuthorBooks.Book import Book
>>> from Modeling.EditingContext import EditingContext
>>> ec=EditingContext()
>>> book=Book()
>>> book.setTitle('Test FixedPoint')
>>> book.setPrice(FixedPoint("3.341"))
>>> book.getTitle(), book.getPrice()
('Test FixedPoint', FixedPoint('3.34', 2))   # PRECISION==2 in Book.py
>>> ec.insert(book)
>>> ec.saveChanges()
>>> book.getTitle(), book.getPrice()
('Test FixedPoint', FixedPoint('3.34', 2))

Here you can check in your db that it was stored as a varchar, as expected. Start a new python and test the fetch:

>>> from fixedpoint import FixedPoint
>>> from AuthorBooks.Book import Book
>>> from Modeling.EditingContext import EditingContext
>>> ec=EditingContext()
>>> books=ec.fetch('Book')
>>> books[0].getTitle(), books[0].getPrice()
('Test FixedPoint', FixedPoint('3.34', 2))

As you can see, FixedPoint is now correctly handled by the framework. Nice, isn't it?!

Behind the scenes: here you can see KeyValueCoding in action. The framework *always* accesses the attributes' values with the so-called "private" methods (storedValueForKey(), takeStoredValueForKey()). Look at their documentation: you'll see that they try to use private setters/getters --such as _setPrice() and _getPrice()-- before the public ones --getPrice() and setPrice(). So:

1. when the framework is about to save the data, it ends up calling _getPrice(), which gently returns the corresponding string,

   [same for validation before saving: type checking also calls _getPrice() and gets a string, so everything's ok]

2. when the framework fetches the data, price is initialized with _setPrice(), which turns the string back into a FixedPoint.

It's probably what you need here ;) However, attention please: I've never used this in real projects (never needed it), so I cannot guarantee that you won't get any errors at some point. If you choose that way and encounter errors or unexpected behaviour, please report; I'll try my best to support this. And I'd be interested in success reports too ;)

-- Sébastien.
------------------------------------------------------------------------
Index: Book.py
===================================================================
RCS file: /cvsroot/modeling/ProjectModeling/Modeling/tests/testPackages/AuthorBooks/Book.py,v
retrieving revision 1.4
diff -u -r1.4 Book.py
--- Book.py	27 Mar 2003 11:47:57 -0000	1.4
+++ Book.py	9 Jul 2003 09:52:24 -0000
@@ -1,7 +1,9 @@
 # Modeling
 from Modeling.CustomObject import CustomObject
 from Modeling.Validation import ValidationException
+from fixedpoint import FixedPoint
 
+PRECISION=2
 
 class Book (CustomObject):
   """
@@ -61,7 +63,14 @@
     "Change the Book / price attribute value"
     self.willChange()
     self._price = price
+    self._price.precision=PRECISION
 
+  def _setPrice(self, value):
+    self._price = FixedPoint(value, PRECISION)
+
+  def _getPrice(self):
+    return str(self._price)
+
   def validatePrice(self, value):
     "Edit this to enforce custom business logic"
     if 0: # your custom bizlogic
Index: model_AuthorBooks.xml
===================================================================
RCS file: /cvsroot/modeling/ProjectModeling/Modeling/tests/testPackages/AuthorBooks/model_AuthorBooks.xml,v
retrieving revision 1.6
diff -u -r1.6 model_AuthorBooks.xml
--- model_AuthorBooks.xml	7 May 2003 11:27:10 -0000	1.6
+++ model_AuthorBooks.xml	9 Jul 2003 09:36:23 -0000
@@ -27,7 +27,7 @@
     <attribute isClassProperty='1' columnName='title' name='title' isRequired='1' precision='0' defaultValue='None' externalType='VARCHAR' width='40' scale='0' type='string' displayLabel='Title'/>
     <attribute isClassProperty='1' columnName='id' name='id' isRequired='1' precision='0' defaultValue='0' externalType='INT' width='0' scale='0' type='int' displayLabel=''/>
     <attribute isClassProperty='0' columnName='FK_WRITER_ID' name='FK_Writer_Id' isRequired='0' precision='0' defaultValue='None' externalType='INTEGER' width='0' scale='0' type='string' displayLabel=''/>
-    <attribute isClassProperty='1' columnName='PRICE' name='price' isRequired='0' precision='10' defaultValue='None' externalType='NUMERIC' width='0' scale='2' type='float' displayLabel=''/>
+    <attribute isClassProperty='1' columnName='PRICE' name='price' isRequired='0' precision='0' defaultValue='None' externalType='VARCHAR' width='10' scale='0' type='string' displayLabel=''/>
     <relation deleteRule='0' isClassProperty='1' multiplicityUpperBound='1' multiplicityLowerBound='0' destinationEntity='Writer' name='author' displayLabel='' joinSemantic='0'>
       <join sourceAttribute='FK_Writer_Id' destinationAttribute='id'/>
     </relation>
------------------------------------------------------------------------
From: Yannick G. <ygi...@yg...> - 2003-07-09 02:38:33
On Tuesday 08 July 2003 20:18, Sebastien Bigaret wrote:
> I wrote:
> > [...] BTW, have you looked at the python
> > cookbook or searched for any monetary handling package? I'm quite
> > confident such problems must have been already addressed somewhere,
> > it's probably worth the search.
>
> Instant Google search:
> http://fixedpoint.sourceforge.net/html/lib/module-FixedPoint.html

Nice !

Will Modeling return some fixed points or should I do the trick with varchars ?

-- Yannick Gingras
   Coder for OBB : O'er Beamish Byssus
   http://OpenBeatBox.org
From: Sebastien B. <sbi...@us...> - 2003-07-09 00:19:10
I wrote:
> [...] BTW, have you looked at the python
> cookbook or searched for any monetary handling package? I'm quite
> confident such problems must have been already addressed somewhere,
> it's probably worth the search.

Instant Google search:
http://fixedpoint.sourceforge.net/html/lib/module-FixedPoint.html
From: Jerome K. <Jer...@fi...> - 2003-07-08 22:46:45
On Wed, Jul 09, 2003 at 12:16:56AM +0200, Sebastien Bigaret wrote:

And once again:

>>> a = 7.85
>>> a
7.8499999999999996
>>> print a
7.85
>>>

This drove me crazy the first time I wrote some code w/ money.
From: Yannick G. <ygi...@yg...> - 2003-07-08 22:25:49
On Tuesday 08 July 2003 17:19, you wrote:
> Fixed point doesn't exist in default python. You always retrieve a
> numeric but it's mapped to a float in python. And please be careful
> about this, as python does some strange rounding while using print()
> and stuff like that. I have a quick explanation here in my bookmarks
> but it's in french.
>
> >>> 6+1
> 7        this is an integer
> >>> 6.0 + 1
> 7.0      this is a float
>
> >>> print "%s" % ( 6.0 + 1)
> 7.0
> >>> print "%d" % ( 6.0 + 1)
> 7

Argh !

>>> val = 1.1
>>> val
1.1000000000000001

I just created a small amount of money from nowhere. A really small one, but the imprecision of floats adds up, and financial controllers really hate it when some money appears from nowhere...

>>> 45.678
45.677999999999997

I just lost a small amount of money in the deep void. A really small one but...

How about:

class FixedPoint:
    def __init__(self):
        self._val = 0       # int
        self._pointPos = 0  # int too

With a few operators overloaded it would map nicely with the SQL decimal and could easily serialize to a simple tuple (like mx.DateTime.tuple()).

If it makes sense I can send a sketchy class that could be shipped with Modeling.

-- Yannick Gingras
   Coder for OBB : Omniscient Behaviouristic Baroque
   http://OpenBeatBox.org
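A minimal sketch of such a class, storing the value as a scaled integer; the names, the operator coverage and the tuple() serialization are illustrative only (negative values, rounding and mixed scales are deliberately ignored):

  class FixedPoint:
      """Minimal fixed-point value: an integer count of 1/10**places units."""

      def __init__(self, value="0", places=2):
          self._places = places
          whole, frac = (str(value) + ".").split(".")[:2]
          frac = (frac + "0" * places)[:places]      # pad/truncate to `places` digits
          self._val = int(whole) * 10 ** places + int(frac)

      def __add__(self, other):
          result = FixedPoint("0", self._places)
          result._val = self._val + other._val       # assumes both operands share the same scale
          return result

      def __str__(self):
          whole, frac = divmod(self._val, 10 ** self._places)
          return "%d.%0*d" % (whole, self._places, frac)

      def tuple(self):
          # simple serializable form, in the spirit of mx.DateTime's tuple()
          return (self._val, self._places)

With that, str(FixedPoint("7.85") + FixedPoint("1.10")) gives '8.95', with none of the binary-float noise shown above.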
From: Sebastien B. <sbi...@us...> - 2003-07-08 22:17:31
Yannick Gingras <yan...@sa...> wrote:
> What are you guys doing with monetary values ?
>
> mx.DateTime is a really nice class for dates management. Is there a class
> out there for money ?
>
> I'm not certain I fully understand how Modeling treats the SQL decimal
> type. Decimal should be fixed point, but retrieving an attribute
> defined by:
>   <attribute displayLabel=''
>              scale='4'
>              name='currencyRate'
>              isRequired='1'
>              externalType='numeric'
>              defaultValue='None'
>              isClassProperty='1'
>              precision='10'
>              columnName='currency_rate'
>              type='string'
>              width='0'/>
> gives a float, which is not quite fixed point... Is there anything wrong
> with my model ?

(It could have helped to know what you are getting, and what you were expecting --I simply assume that this was an approximation problem; if not, please be more explicit.)

As Soaf said:

> Fixed point doesn't exist in default python. You always retrieve a
> numeric but it's mapped to a float in python. And please be careful about
> this, as python does some strange rounding while using print() and stuff
> like that.

--> The problem here is python combined with the adaptor; this is not a problem with the framework in itself. Simply look at this (using the test package AuthorBooks):

>>> import psycopg
>>> cnx=psycopg.connect(dsn="host=localhost dbname=AUTHOR_BOOKS user=postgres")
>>> cur=cnx.cursor()
>>> cur.execute("INSERT INTO BOOK(id,title,price) VALUES(5,'Test Title',7.92)")
>>> cnx.commit()
>>> cur.execute('SELECT * FROM BOOK WHERE id=5')
>>> cur.fetchone()
(7.9199999999999999, 'Test Title', 5, None)

Since your column is NUMERIC(10,4), any python adaptor (all but sqlite, for which all types are strings) will return a float, and there's nothing like a fixed-point float in python. So you'll get a value inherently dependent on the binary representation of the value, see:

>>> 7.48
7.4800000000000004
>>> 7.85
7.8499999999999996
>>> 7.85==7.8499999999999996
1

However, I noticed that your python type is _string_; I presume this is intended. What if you store the value in a VARCHAR? You'll stay away from these problems, then.

BTW, have you looked at the python cookbook or searched for any monetary handling package? I'm quite confident such problems must have been already addressed somewhere, it's probably worth the search.

Regards,

-- Sébastien.
From: SoaF at H. <so...@la...> - 2003-07-08 21:21:51
Seb, please look at sf to fix this f**king 'reply to the list' :) I'm forwarding my reply.
From: Yannick G. <yan...@sa...> - 2003-07-08 21:10:27
What are you guys doing with monetary values ?

mx.DateTime is a really nice class for dates management. Is there a class out there for money ?

I'm not certain I fully understand how Modeling treats the SQL decimal type. Decimal should be fixed point, but retrieving an attribute defined by:

  <attribute displayLabel=''
             scale='4'
             name='currencyRate'
             isRequired='1'
             externalType='numeric'
             defaultValue='None'
             isClassProperty='1'
             precision='10'
             columnName='currency_rate'
             type='string'
             width='0'/>

gives a float, which is not quite fixed point... Is there anything wrong with my model ?

-- Yannick Gingras
   Byte Gardener, Savoir-faire Linux inc.
   (514) 276-5468
From: Sebastien B. <sbi...@us...> - 2003-07-08 18:57:06
Nice try, but that was not the place where the test should go. Since you tried to dive into the quite complex class SQLExpression, I'll be a little verbose and "techy" in the explanation ;)

This actually works because when the sql is built for this qualifier, it ultimately tries to format the value for your attribute; since your attribute is an integer, you end up with sqlStringForNumber() being called. But then, you'd still need to replicate the same code for strings, floats, etc. to support lists with one item only. Moreover, these methods are intended to work with a single value of a given type, not with lists! That's why I added the method sqlStringForInOperatorValue.

However, your proposal immediately rang a bell and I thank you again, indeed, because it revealed a flaw in the last patch I posted a few minutes ago: when it comes to the operators 'in' and 'not in', it's nonsense to call either of these methods (e.g. sqlStringForNumber, triggered by sqlStringForKeyValueQualifier, through sqlStringForValue and formatValueForAttribute).

=> please un-apply the previous patch and use this one instead.

-- Sébastien.

PS: To be completely honest there's a remaining problem here, because each value in the list should be formatted depending on the attribute's type (for example, with sqlStringForNumber() for an int attribute). However, you'll *never* encounter any problem unless you build your qualifier instances explicitly (vs. from a string) with a list of values that need to be formatted (Date is a good example). I've no time for this now, but this will obviously be tested and corrected before it is committed into CVS.

------------------------------------------------------------------------
--- SQLExpression.py.ko	Tue Jul  8 20:19:19 2003
+++ SQLExpression.py	Tue Jul  8 20:34:53 2003
@@ -1190,9 +1190,6 @@
     key=aQualifier.key()
     value=aQualifier.value()
 
-    if aQualifier.operator() in (QualifierOperatorIn, QualifierOperatorNotIn):
-      value=tuple(value)
-
     if not caseInsensitive:
       operatorStr=self.sqlStringForSelector(aQualifier.operator(), value)
 
@@ -1201,13 +1198,24 @@
       value=self.sqlPatternFromShellPattern(value)
 
     keyString=self.sqlStringForAttributeNamed(key)
-    valueString=self.sqlStringForValue(value, key)
+    if aQualifier.operator() in (QualifierOperatorIn, QualifierOperatorNotIn):
+      valueString=self.sqlStringForInOperatorValue(value)
+    else:
+      valueString=self.sqlStringForValue(value, key)
 
     if not caseInsensitive:
       return keyString+' '+operatorStr+' '+valueString
     else:
       return self.sqlStringForCaseInsensitiveLike(keyString, valueString)
 
+  def sqlStringForInOperatorValue(self, aList):
+    """
+    """
+    if len(aList)==1:
+      return '(%s)'%aList[0]
+    else:
+      return str(tuple(aList))
+
   def sqlStringForNegatedQualifier(self, aQualifier):
     """
     Returns the SQL string for the supplied Qualifier
------------------------------------------------------------------------

Yannick Gingras <yan...@sa...> wrote:
> On July 8, 2003 01:42 pm, Yannick Gingras wrote:
> > Argh, I think we have a problem when there is only one elem in the
> > choice list :
> >
> > <Fault 1: 'Modeling.Adaptor.GeneralAdaptorException:Couldn\'t evaluate
> > expression SELECT t0.id, t0.gl_id, t0.fs2_id FROM FSLINK t0 WHERE
> > (t0.fs2_id NOT IN (4,) AND t0.gl_id LIKE 1). Reason:
> > _mysql_exceptions.ProgrammingError:(1064, "You have an error in your SQL
> > syntax near \') AND t0.gl_id LIKE 1)\' at line 1")'>
> >
> > I don't think that : "NOT IN (4,)" is legal...
>
> I don't know if it's where the test should go, but the following patch
> fixes the problem :
>
> --- SQLExpression.old.py	2003-07-08 13:47:22.000000000 -0400
> +++ SQLExpression.py	2003-07-08 14:08:58.000000000 -0400
> @@ -1235,7 +1235,10 @@
>       See also: formatValueForAttribute()
>       """
>       if aNumber is not None:
> -        return str(aNumber)
> +        if type(aNumber) == type((1,)) and len(aNumber) == 1:
> +          return "(%d)" % aNumber
> +        else:
> +          return str(aNumber)
>       else: return 'NULL'
>
>     def sqlStringForQualifier(self, aQualifier):
>
> --
> Yannick Gingras
> Byte Gardener, Savoir-faire Linux inc.
> (514) 276-5468
From: Sebastien B. <sbi...@us...> - 2003-07-08 18:28:08
Yannick Gingras <yan...@sa...> wrote:
> Argh, I think we have a problem when there is only one elem in the
> choice list :
>
> <Fault 1: 'Modeling.Adaptor.GeneralAdaptorException:Couldn\'t evaluate
> expression SELECT t0.id, t0.gl_id, t0.fs2_id FROM FSLINK t0 WHERE (t0.fs2_id
> NOT IN (4,) AND t0.gl_id LIKE 1). Reason:
> _mysql_exceptions.ProgrammingError:(1064, "You have an error in your SQL
> syntax near \') AND t0.gl_id LIKE 1)\' at line 1")'>
>
> I don't think that : "NOT IN (4,)" is legal...

No it isn't! Thanks for reporting. Apply the enclosed patch to your version, it should solve the problem.

-- Sébastien.

------------------------------------------------------------------------
--- SQLExpression.py	Tue Jul  8 20:19:19 2003
+++ SQLExpression.py	Tue Jul  8 20:20:13 2003
@@ -1191,7 +1191,7 @@
     value=aQualifier.value()
 
     if aQualifier.operator() in (QualifierOperatorIn, QualifierOperatorNotIn):
-      value=tuple(value)
+      value=self.sqlStringForInOperatorValue(value)
 
     if not caseInsensitive:
       operatorStr=self.sqlStringForSelector(aQualifier.operator(), value)
@@ -1208,6 +1208,14 @@
     else:
       return self.sqlStringForCaseInsensitiveLike(keyString, valueString)
 
+  def sqlStringForInOperatorValue(self, aList):
+    """
+    """
+    if len(aList)==1:
+      return '(%s)'%aList[0]
+    else:
+      return str(tuple(aList))
+
   def sqlStringForNegatedQualifier(self, aQualifier):
     """
     Returns the SQL string for the supplied Qualifier
------------------------------------------------------------------------
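For reference, the underlying issue is plain Python tuple formatting; a tiny illustration of what the new sqlStringForInOperatorValue() avoids (the values are made up):

  pks = [4]
  print str(tuple(pks))       # -> '(4,)'       trailing comma, invalid SQL
  print '(%s)' % pks[0]       # -> '(4)'        the single-item branch of the patch

  pks = [4, 8, 15]
  print str(tuple(pks))       # -> '(4, 8, 15)' fine for several items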
From: Yannick G. <yan...@sa...> - 2003-07-08 18:13:32
On July 8, 2003 01:42 pm, Yannick Gingras wrote:
> Argh, I think we have a problem when there is only one elem in the
> choice list :
>
> <Fault 1: 'Modeling.Adaptor.GeneralAdaptorException:Couldn\'t evaluate
> expression SELECT t0.id, t0.gl_id, t0.fs2_id FROM FSLINK t0 WHERE
> (t0.fs2_id NOT IN (4,) AND t0.gl_id LIKE 1). Reason:
> _mysql_exceptions.ProgrammingError:(1064, "You have an error in your SQL
> syntax near \') AND t0.gl_id LIKE 1)\' at line 1")'>
>
> I don't think that : "NOT IN (4,)" is legal...

I don't know if it's where the test should go, but the following patch fixes the problem:

--- SQLExpression.old.py	2003-07-08 13:47:22.000000000 -0400
+++ SQLExpression.py	2003-07-08 14:08:58.000000000 -0400
@@ -1235,7 +1235,10 @@
     See also: formatValueForAttribute()
     """
     if aNumber is not None:
-      return str(aNumber)
+      if type(aNumber) == type((1,)) and len(aNumber) == 1:
+        return "(%d)" % aNumber
+      else:
+        return str(aNumber)
     else: return 'NULL'
 
   def sqlStringForQualifier(self, aQualifier):

-- Yannick Gingras
   Byte Gardener, Savoir-faire Linux inc.
   (514) 276-5468