Thread: [Modeling-users] Fetching raw rows
Status: Abandoned
From: Sebastien B. <sbi...@us...> - 2003-07-14 19:38:34
Hi all,

I've submitted patch #771168:
https://sourceforge.net/tracker/index.php?func=detail&aid=771168&group_id=58935&atid=489337
which enables fetching raw rows (i.e. you get dictionaries) instead of
fully-initialized objects. I remember we already discussed this feature
some time ago with some of you.

Usage
-----
Either use FetchSpecification.setFetchesRawRows(1), or directly:
EditingContext.fetch(..., rawRows=1, ...)

Quick benchmarks
----------------
On 5000 db rows, I get:

  - normal fetch (full objects): 5.6s
  - raw rows (dictionaries):

Tested on postgresql and mysql.

-- Sébastien.
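[Editorial aside: to give a feel for why raw rows are so much cheaper, here is a minimal, self-contained sketch. The `Book` class, the `EditingContextStub`, and the field names are all illustrative stand-ins, not the framework's actual code; the point is only that building plain dictionaries skips the per-object bookkeeping a full fetch must do.]

```python
# Illustrative sketch only -- NOT the Modeling framework's code.
# Contrasts building plain dictionaries (raw rows) with instantiating
# and registering one object per database row.
import time

fields = ('id', 'title')

class EditingContextStub:
    """Toy stand-in for an EC's uniquing registry and snapshots."""
    def __init__(self):
        self.registered = {}
        self.snapshots = {}

class Book:
    """Stand-in for a fully-initialized model object."""
    def __init__(self, row, ec):
        self.id, self.title = row
        # a real fetch also sets up faults for relationships, etc.
        ec.registered[self.id] = self
        ec.snapshots[self.id] = dict(zip(fields, row))

rows = [(i, 'title-%d' % i) for i in range(5000)]

ec = EditingContextStub()
t0 = time.time()
objects = [Book(r, ec) for r in rows]
t_objects = time.time() - t0

t0 = time.time()
raw = [dict(zip(fields, r)) for r in rows]
t_raw = time.time() - t0

print('objects: %.4fs  raw rows: %.4fs' % (t_objects, t_raw))
```

In the real framework the gap is much larger, since full initialization does far more work per row than this toy version.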
From: Yannick G. <yan...@sa...> - 2003-07-14 19:44:46
On July 14, 2003 03:38 pm, Sebastien Bigaret wrote:
> - normal fetch (full objects): 5.6s
>
> - raw rows (dictionaries):

Wow! The time is null! ;)

Thanks Sébastien, this one is REALLY appreciated here. :)

--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468
From: Sebastien B. <sbi...@us...> - 2003-07-14 20:01:10
Yannick Gingras <yan...@sa...> wrote:
> On July 14, 2003 03:38 pm, Sebastien Bigaret wrote:
> > - normal fetch (full objects): 5.6s
> >
> > - raw rows (dictionaries):
>
> Wow! The time is null!

Impressive, isn't it? ;-)

Here is the full version (fetching 5000 objects):

  - normal fetch (full objects): 5.6s
  - raw rows (dictionaries): 0.40s

-- Sébastien.
From: Yannick G. <yan...@sa...> - 2003-07-14 20:28:59
On July 14, 2003 04:00 pm, Sebastien Bigaret wrote:
> Impressive, isn't it? ;-)
>
> Here is the full version (fetching 5000 objects):
>
> - normal fetch (full objects): 5.6s
>
> - raw rows (dictionaries): 0.40s

For some reason, I never experience such amazing performance boosts, but a
quick test here shows that my fetch time drops to 1/3 of what it was. :D

Seb, you're the man!

--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468
From: Sebastien B. <sbi...@us...> - 2003-07-16 14:01:02
Yannick Gingras <yan...@sa...> wrote:
> For some reason, I never experience such amazing performance boosts

Must be because the objects I used for benchmarking only had a PK and one
attribute!

Okay, back to the functionality: as it stands in the patch, fetchesRawRows
misses two important features:

  1. It must behave the way a normal fetch behaves. This means inserted
     objects must be present, while deleted objects shouldn't be returned.

  2. It does not work at all for nested ECs.

I thought that those of you who are already using the patch should be
aware of this.

I'm currently working on both problems. Unit tests are written; now I'm on
the code itself. When integrated into CVS, it will behave as expected in
both situations. I'll report back here.

-- Sébastien.
From: Sebastien B. <sbi...@us...> - 2003-07-17 14:44:03
I wrote:
> Okay, back to the functionality: as it stands in the patch, fetchesRawRows
> misses two important features:
>
>   1. It must behave the way a normal fetch behaves. This means inserted
>      objects must be present, while deleted objects shouldn't be returned.
>
>   2. It does not work at all for nested ECs.
>
> I thought that those of you who are already using the patch should be
> aware of this.
>
> I'm currently working on both problems. Unit tests are written; now I'm on
> the code itself. When integrated into CVS, it will behave as expected in
> both situations. I'll report back here.

Full functionality was integrated into CVS yesterday evening, and I've
completed the documentation today. All this will be in the next release.

-- Sébastien.
From: Mario R. <ma...@ru...> - 2003-07-17 16:55:59
On jeudi, juil 17, 2003, at 16:43 Europe/Amsterdam, Sebastien Bigaret wrote:
> I wrote:
>> Okay, back to the functionality: as it is made in the patch,
>> fetchesRawRows misses two important functionalities: [...]
>
> Full functionality has been integrated in cvs yesterday evening, and I've
> completed the documentation today. All this will be in the next release.

Attempting, and failing, to keep up with you... ;)

Anyhow, just a small clarification: does this mean that raw fetches must in
any case happen within an editing context? I.e. if the raw fetch implies
objects not yet in the EC, are these objects loaded, and
"CustomObject-initialised", into the EC?

Given the same db, what is the performance difference between the following
fetches (for logically equivalent queries)?

  - 1st "classic" fetch (in empty EC)
  - 2nd "classic" fetch (objects in result set already known to the EC)
  - 1st raw fetch (in empty EC)
  - 2nd raw fetch (objects in result set already known to the EC)
  - 1st dbapi2.0 execute query (directly via the python adaptor)
  - 2nd dbapi2.0 execute query (directly via the python adaptor)

It would be interesting to keep an eye on these values for a particular
setup, so that when changes are made to the system, unexpected performance
side effects can still be spotted. Maybe such a script could be added to
the tests?

And, the classic fetch may be further broken up into two, one built with
Qualifiers and the other with RawQualifiers (as per a recent thread), to
keep an eye on this known possible bottleneck in the system.

mario
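[Editorial aside: a tiny harness in the spirit of this suggestion could look like the sketch below. The function name and the stand-in workload are hypothetical; a real script would plug in the actual classic fetch, raw fetch, and DB-API calls in place of the lambda.]

```python
# Hypothetical profiling harness; the workload below is a placeholder
# for the real fetch variants (classic fetch, raw fetch, dbapi query).
import time

def bench(label, fn, repeat=5):
    """Run fn() `repeat` times and report the average wall-clock time."""
    times = []
    for _ in range(repeat):
        t0 = time.time()
        fn()
        times.append(time.time() - t0)
    avg = sum(times) / repeat
    print('%-45s avg %.4fs' % (label, avg))
    return avg

# placeholder standing in for e.g. ec.fetch('Book', rawRows=1)
avg = bench('raw fetch stand-in (build 5000 dicts)',
            lambda: [{'id': i} for i in range(5000)])
```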
From: Sebastien B. <sbi...@us...> - 2003-07-17 17:23:42
Mario Ruggier <ma...@ru...> wrote:
> On jeudi, juil 17, 2003, at 16:43 Europe/Amsterdam, Sebastien Bigaret
> wrote:

Hey, you probably meant: "at 16:43 Europe/Paris" ;)

> Attempting, and failing, to keep up with you... ;)

Do you mean that you tested it and it failed? I committed slight changes
this afternoon, because of an unhandled situation when using nested ECs.
However, it shouldn't have failed in a "standard" situation. Could you be
more explicit?

> Anyhow, just a small clarification: this means that raw fetches must in
> any case exist within an editing context? I.e. if the raw fetch implies
> objects not yet in the EC, these objects are loaded, and "CustomObject
> initialised", into the EC?

It does not matter whether the objects are already loaded or not, and
fetching raw rows never creates any object. When I said:

> 1. It must behave the way a normal fetch behaves. This means inserted
>    objects must be present, while deleted objects shouldn't be returned.

I meant that inserted objects should appear when a fetch for raw rows is
made (at least, if the object would appear in a normal fetch: remember that
if you insert, say, a Book, then fetch all books, you'll get the new one
along with the others --even if the new object is not saved in the database
yet). I've just added a section in the User's Guide about this today!

BTW, same for deleted objects: they shouldn't appear in the result set of a
fetch (objects or raw rows).

For completeness, I'll add that when an object is modified, it should
appear in its modified state in the result set.

> Given the same db, what is the performance difference between the
> following fetches (for logically equivalent queries)? [...]
>
> It would be interesting to keep an eye on these values for a particular
> setup [...] Maybe such a script could be added to the tests?

I have a test db w/ 5000 simple objects; I'll try some tests and report.
This could be added to the unit tests, right.

> And, the classic fetch may be further broken up into two, one built with
> Qualifiers and the other with RawQualifiers (as per a recent thread), to
> keep an eye on this known possible bottleneck in the system.

Sorry, I think I won't do this: we've just clarified the API, and I do not
feel like splitting the fetch in two. However, what I *will* do is:

  1. document the alternate (and less cpu-consuming) way of building
     qualifiers,

  2. add a section in the user's guide dealing with performance tuning,
     explaining this particular point among others.

I think this should be enough, shouldn't it?

-- Sébastien.
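[Editorial aside: the rule described above -- inserted objects appear, deleted ones disappear, modified ones show their in-memory state -- can be sketched in a few lines of plain Python. All names here are illustrative; this is not the framework's actual implementation.]

```python
# Sketch of reconciling a raw-row fetch with pending EC changes.
def merge_pending_changes(fetched_rows, inserted, deleted_pks, modified):
    """fetched_rows: list of dicts straight from the database.
    inserted:     dicts for objects inserted but not yet saved.
    deleted_pks:  set of primary keys deleted in the EC.
    modified:     dict mapping pk -> current in-memory values."""
    result = []
    for row in fetched_rows:
        pk = row['id']
        if pk in deleted_pks:
            continue                          # deleted: drop from result set
        result.append(modified.get(pk, row))  # modified: return current state
    result.extend(inserted)                   # inserted: appear before save
    return result

rows = [{'id': 1, 'title': 'A'}, {'id': 2, 'title': 'B'}]
merged = merge_pending_changes(
    rows,
    inserted=[{'id': 3, 'title': 'C'}],
    deleted_pks={2},
    modified={1: {'id': 1, 'title': 'A (edited)'}})
print(merged)
# -> [{'id': 1, 'title': 'A (edited)'}, {'id': 3, 'title': 'C'}]
```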
From: Mario R. <ma...@ru...> - 2003-07-18 11:58:13
On Jeudi, juil 17, 2003, at 19:23 Europe/Valletta, Sebastien Bigaret wrote:
> Mario Ruggier <ma...@ru...> wrote:
>> On jeudi, juil 17, 2003, at 16:43 Europe/Amsterdam, Sebastien Bigaret
>> wrote:
>
> Hey, you probably meant: "at 16:43 Europe/Paris" ;)

Ha! It's annoying, isn't it? It insists on adding the time and place of the
message being replied to, but the place is the receiving end (or, worse,
where it thinks the receiving end is -- when the setting is Geneva, it uses
Zurich! Hey, I'd rather be in Amsterdam than in Zurich ;). Plus, there is
some language interference between my settings and the system settings...
Anyway, it is the Apple Mail 1.2.5 client, and actually if anyone knows how
to configure the leading text that is auto-added to replies, I'd like to
know! (Not found how to do it yet.)

>>> Full functionality has been integrated in cvs yesterday evening, and
>>> I've completed the documentation today. All this will be in the next
>>> release.
>>
>> Attempting, and failing, to keep up with you... ;)
>
> Do you mean that you tested it and it failed? I committed slight changes
> this afternoon, because of an unhandled situation when using nested ECs.
> However, it shouldn't have failed in a "standard" situation. Could you
> be more explicit?

No, I did not try it. Failing -- in terms of keeping up with your dev
rhythm ;) ...

> I have a test db w/ 5000 simple objects; I'll try some tests and report.

Thanks for the numbers! It helps with the overall picture of what goes
on... However, just a question about the raw fetch:

> [raw row] 1st fetch (ec empty): 0.618131995201
> [raw row] 2nd fetch (objects already loaded): 0.61448597908
> [raw row] 3rd fetch (objects already loaded): 2.24008309841
...
> 2. You probably already noticed that fetching raw rows is significantly
>    slower when the objects are loaded. The reason is that objects
>    already loaded are checked for modification, because as I explained
>    in my previous post we have to return the modifications, not the
>    fetched data, if an object has been modified.

OK, I see why the 2nd fetch in this case could be significantly longer.
But why the 3rd? Barring any modifications to the loaded objects, there
should not be any differences between the 2nd and subsequent raw fetches?

>> And, the classic fetch may be further broken up into two, one built
>> with Qualifiers and the other with RawQualifiers (as per a recent
>> thread), to keep an eye on this known possible bottleneck in the
>> system.
>
> Sorry, I think I won't do this: we've just clarified the API, and I do
> not feel like splitting the fetch in two. However, what I *will* do is:
>
>   1. document the alternate (and less cpu-consuming) way of building
>      qualifiers,
>
>   2. add a section in the user's guide dealing with performance tuning,
>      explaining this particular point among others.
>
> I think this should be enough, shouldn't it?

I only meant having two versions of a sample qualifier for a specific
query, and using them in an internal test/profiling script... nothing at
all for the public API!

mario
From: Sebastien B. <sbi...@us...> - 2003-07-18 12:18:45
>>> Attempting, and failing, to keep up with you... ;)
>>
>> Do you mean that you tested it and it failed? I committed slight
>> changes this afternoon, because of an unhandled situation when using
>> nested ECs. However, it shouldn't have failed in a "standard"
>> situation. Could you be more explicit?
>
> No, I did not try it. Failing -- in terms of keeping up with your dev
> rhythm ;)

Okay... I sometimes have problems correctly interpreting English,
especially after twilight!

> Thanks for the numbers! It helps with the overall picture of what goes
> on... However, just a question about the raw fetch:
>
>> [raw row] 1st fetch (ec empty): 0.618131995201
>> [raw row] 2nd fetch (objects already loaded): 0.61448597908
>> [raw row] 3rd fetch (objects already loaded): 2.24008309841
> ...
>> 2. You probably already noticed that fetching raw rows is significantly
>>    slower when the objects are loaded. The reason is that objects
>>    already loaded are checked for modification, because as I explained
>>    in my previous post we have to return the modifications, not the
>>    fetched data, if an object has been modified.
>
> OK, I see why the 2nd fetch in this case could be significantly longer.
> But why the 3rd? Barring any modifications to the loaded objects, there
> should not be any differences between the 2nd and subsequent raw
> fetches?

Sorry, a raw copy-paste between two functions and you get the wrong
message. It should read:

  1st fetch (ec empty)
  2nd fetch (ec still empty)
  3rd fetch (objects already loaded)

So you get what was explained: slower if all objects are already loaded.

> I only meant having two versions of a sample qualifier for a specific
> query, and using them in an internal test/profiling script... nothing at
> all for the public API!

Seems like I was at least half asleep yesterday evening!

-- Sébastien.
From: Sebastien B. <sbi...@us...> - 2003-07-17 18:36:52
Mario Ruggier <ma...@ru...> wrote:
> Given the same db, what is the performance difference between the
> following fetches (for logically equivalent queries)?
> - 1st "classic" fetch (in empty EC)
> - 2nd "classic" fetch (objects in result set already known to the EC)
> - 1st raw fetch (in empty EC)
> - 2nd raw fetch (objects in result set already known to the EC)
> - 1st dbapi2.0 execute query (directly via the python adaptor)
> - 2nd dbapi2.0 execute query (directly via the python adaptor)

Here are the figures. Test database: 5000 objects with 3 attributes: a PK,
a FK (to-one relation to another object of a different type), and a text
field.

[std] 1st fetch : 7.20251297951
[std] 2nd fetch : 1.03094005585

[raw row] 1st fetch (ec empty): 0.618131995201
[raw row] 2nd fetch (objects already loaded): 0.61448597908
[raw row] 3rd fetch (objects already loaded): 2.24008309841

[psycopg] 1st fetch: 0.038547039032
[psycopg] 2nd fetch: 0.0789960622787

Comments:

1. No surprise: fetching real objects takes much more time than a simple
   fetch w/ psycopg, and a raw psycopg fetch is the fastest. Maybe it's
   time for me to study the fetching process in detail, to see where it
   can be enhanced. This could be done after 0.9-pre-10, i.e. after
   finishing the documentation for PyModels first.

2. You probably already noticed that fetching raw rows is significantly
   slower when the objects are loaded. The reason is that objects already
   loaded are checked for modification, because as I explained in my
   previous post we have to return the modifications, not the fetched
   data, if an object has been modified.

I'm currently studying this; I have an implementation that does not consume
more time when no objects are modified:

[raw row] 1st fetch (ec empty): 0.595005989075
[raw row] 2nd fetch : 0.585139036179
[raw row] 3rd fetch (objects already loaded): 0.607128024101

However, we still cannot avoid the additional overhead when some objects
are modified, and the more modified objects we have, the slower the fetch
will be (the earlier figure, 2.24s, would be the upper limit here, when
all objects are modified).

Second, I do not want to commit this quicker implementation now, because a
problem remains: if the database has been changed in the meantime, you can
get raw rows whose values are not the same as those of _unmodified_
objects in the EditingContext. I'm not sure whether this is a significant
problem -- or, put differently, whether we have to pay extra cpu-time to
ensure that this does not happen. But I feel a little touchy about making
an exception to the general rule.

> It would be interesting to keep an eye on these values for a particular
> setup, so that when changes are made to the system, unexpected
> performance side effects can still be spotted. Maybe such a script could
> be added to the tests?

Sorry, I did not read you right: the idea of observing these figures to
detect the impact of changes on performance is indeed a very good idea.
That will be done, sure.

Cheers,

-- Sébastien.
From: Sebastien B. <sbi...@us...> - 2003-07-17 23:35:22
Sebastien Bigaret <sbi...@us...> wrote:
> [std] 1st fetch : 7.20251297951
> [std] 2nd fetch : 1.03094005585
>
> [raw row] 1st fetch (ec empty): 0.618131995201
> [raw row] 2nd fetch (objects already loaded): 0.61448597908
> [raw row] 3rd fetch (objects already loaded): 2.24008309841
>
> [psycopg] 1st fetch: 0.038547039032
> [psycopg] 2nd fetch: 0.0789960622787

I guess I had no luck when running [std], i.e. standard
EditingContext.fetch() on 5000 objects. Here are corrected figures,
averaged over 5 executions:

Python2.1
---------
normal:
  [std] 1st fetch : 6.74
  [std] 2nd fetch : 0.96

python -O:
  [std] 1st fetch : 6.25
  [std] 2nd fetch : 0.91

Python2.2
---------
normal:
  [std] 1st fetch : 5.80
  [std] 2nd fetch : 0.90

python -O:
  [std] 1st fetch : 5.30
  [std] 2nd fetch : 0.83

Out of curiosity, I played ~20 minutes w/ psyco. After some tries, with
the following code on top of the script:

------------------------------------------------------------------------
import psyco
from threading import _RLock
from Modeling.DatabaseContext import DatabaseContext
from Modeling.DatabaseChannel import DatabaseChannel
from Modeling.ClassDescription import classDescriptionForName
psyco.bind(DatabaseContext.initializeObject)
psyco.bind(DatabaseChannel.fetchObject)
psyco.bind(classDescriptionForName)
psyco.bind(_RLock.acquire)
psyco.bind(_RLock.release)
------------------------------------------------------------------------

I got the following figures:

Python2.1+psyco
---------------
normal:
  [std] 1st fetch : 5.70
  [std] 2nd fetch : 0.81

python -O:
  [std] 1st fetch : 5.52
  [std] 2nd fetch : 0.79

Python2.2+psyco
---------------
normal:
  [std] 1st fetch : 4.32
  [std] 2nd fetch : 0.72

python -O:
  [std] 1st fetch : 4.21
  [std] 2nd fetch : 0.68

Note on psyco: this was tuned just for fetching, and for fetching very
simple objects, that's all. This would obviously need further
investigation. But since I tried it, I thought I could share: I find it
amazing to get a significant performance improvement just a few minutes
after having installed it.

That's more meat for the forthcoming 'performance tuning' section in the
guide. As a general conclusion, py2.2 is faster than py2.1, and the '-O'
option is definitely worth a try.

BTW: who is using the framework w/ python2.1 alone? And with py2.1 and
(because of) zope?

-- Sébastien.
From: Sebastien B. <sbi...@us...> - 2003-07-18 12:41:29
I wrote:
[...]
> [raw row] 1st fetch (ec empty): 0.618131995201
> [raw row] 2nd fetch (objects already loaded): 0.61448597908
> [raw row] 3rd fetch (objects already loaded): 2.24008309841
[...]
> 2. You probably already noticed that fetching raw rows is significantly
>    slower when the objects are loaded. The reason is that objects
>    already loaded are checked for modification, because as I explained
>    in my previous post we have to return the modifications, not the
>    fetched data, if an object has been modified.
>
> I'm currently studying this; I have an implementation that does not
> consume more time when no objects are modified:
>
> [raw row] 1st fetch (ec empty): 0.595005989075
> [raw row] 2nd fetch : 0.585139036179
> [raw row] 3rd fetch (objects already loaded): 0.607128024101
>
> However, we still cannot avoid the additional overhead when some objects
> are modified, and the more modified objects we have, the slower the
> fetch will be (the earlier figure, 2.24s, would be the upper limit here,
> when all objects are modified).
>
> Second, I do not want to commit this quicker implementation now, because
> a problem remains: if the database has been changed in the meantime, you
> can get raw rows whose values are not the same as those of _unmodified_
> objects in the EditingContext. I'm not sure whether this is a
> significant problem -- or, put differently, whether we have to pay extra
> cpu-time to ensure that this does not happen. But I feel a little touchy
> about making an exception to the general rule.

Okay, I thought I had solved this and committed it to cvs
[DatabaseChannel.fetchObject() v1.15]. It gave the following figures, on
the same basis (py2.2 -O / no psyco):

[raw row] 1st fetch (ec empty): 0.60
[raw row] 2nd fetch : 0.59
[raw row] 3rd fetch (objects already loaded): 0.66

Alas, I then tried it with all objects modified, and it took... about 1
minute for 5000 objects.

I thought testing whether the object was modified was a good idea, and it
was when no objects were modified. But when all 5000 objects are modified,
looking in a list of len(5000) for an object takes on average 2500
look-ups, hence 2500 calls to __eq__. For 5000 objects, that's
5000*2500 = 12.5e6 calls to __eq__!

So back to the old behaviour, and the following figures (py2.2 -O):

[raw row] 1st fetch (ec empty): 0.522832036018
[raw row] 2nd fetch : 0.516697049141
[raw row] 3rd fetch (objects already loaded): 1.80115604401
[raw row] 4th fetch (all objects modified): 1.70906305313

I'll commit this soon. And I guess it's time for me to stop annoying you
w/ all these performance considerations.

-- Sébastien.
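[Editorial aside: the quadratic blow-up described above is easy to reproduce. A membership test against a list invokes `__eq__` once per scanned element, while a dictionary keyed by primary key answers in constant time. A hypothetical illustration, not the framework's code:]

```python
# Each `o in objs` on a list scans elements with __eq__ until it finds a
# match: ~n/2 calls on average, so n lookups cost O(n**2) comparisons.
class Obj:
    def __init__(self, pk):
        self.pk = pk
    def __eq__(self, other):
        return self.pk == other.pk

objs = [Obj(i) for i in range(1000)]

hits_list = sum(1 for o in objs if o in objs)      # O(n**2) overall

# Keying a dict by primary key makes each test O(1):
by_pk = {o.pk: o for o in objs}
hits_dict = sum(1 for o in objs if o.pk in by_pk)  # O(n) overall

print(hits_list, hits_dict)  # -> 1000 1000
```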
From: Jerome K. <Jer...@fi...> - 2003-07-18 07:00:28
Sebastien Bigaret wrote:
> That's more meat for the forthcoming 'performance tuning' section in the
> guide. As a general conclusion, py2.2 is faster than py2.1, and the '-O'
> option is definitely worth a try.

Yep, 2.2 is faster than 2.1, but the big performance boost is in 2.3 (I
read this a couple of times over the net).

Another thing: do you use `if __debug__` in Modeling? (I don't have the
code since I'm at work.) The most important thing w/ -O is that it leaves
the code inside `if __debug__` out of the compiled bytecode, so this can
really be a big improvement.

And if you read the Python Cookbook carefully, you will find some really
impressive performance boosts using map and other list or dict tricks.
The drawback is that the code isn't really eye-candy after this tweak.

> BTW: who is using the framework w/ python2.1 alone? And with py2.1 and
> (because of) zope?

I don't.
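[Editorial aside: a generic sketch of the tip, nothing Modeling-specific. Under `python -O`, `__debug__` is false and both `assert` statements and `if __debug__:` blocks are compiled away, so checks guarded this way cost nothing in optimized runs.]

```python
def fetch_object(row):
    # this sanity check disappears entirely under `python -O`
    if __debug__:
        assert isinstance(row, dict), 'expected a raw-row dictionary'
    return row

print(fetch_object({'id': 1}))  # -> {'id': 1}
```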
From: Sebastien B. <sbi...@us...> - 2003-07-18 08:19:45
Jerome Kerdreux <Jer...@fi...> wrote:
> Yep, 2.2 is faster than 2.1, but the big performance boost is in 2.3.
> (I read this a couple of times over the net.)

While this is still experimental (py2.3 is not officially supported yet),
you're definitely right:

Python2.3 / +psyco
------------------
[std] 1st fetch : 4.89 / 3.57
[std] 2nd fetch : 0.73 / 0.64

Python2.3 -O / +psyco
---------------------
[std] 1st fetch : 4.65 / 3.55
[std] 2nd fetch : 0.72 / 0.64

> Another thing: do you use `if __debug__` in Modeling? (I don't have the
> code since I'm at work.) The most important thing w/ -O is that it
> leaves the code inside `if __debug__` out of the compiled bytecode, so
> this can really be a big improvement.

No, I don't use this; thanks for the tip.

> And if you read the Python Cookbook carefully, you will find some
> really impressive performance boosts using map and other list or dict
> tricks. The drawback is that the code isn't really eye-candy after this
> tweak.

I'll check that, however.

-- Sébastien.
From: Yannick G. <ygi...@yg...> - 2003-07-18 10:35:31
On Friday 18 July 2003 04:19, Sebastien Bigaret wrote:
> > And if you read the Python Cookbook carefully, you will find some
> > really impressive performance boosts using map and other list or dict
> > tricks. The drawback is that the code isn't really eye-candy after
> > this tweak.
>
> I'll check that, however.

Another trick is that Psyco is *much* more effective with new-style
classes (derived from object). Since you want to keep Zope support it may
be a problem, but there is another trick that most people ignore: the
parent in a class declaration is an expression. So this is valid:

  class Object:
      pass

  def topLevelClass():
      try:
          return object
      except NameError:
          return Object

  class MySuperFastClass(topLevelClass()):
      pass

--
Yannick Gingras
Coder for OBB : Optimum Brawny Buspirone
http://OpenBeatBox.org
From: Sebastien B. <sbi...@us...> - 2003-07-18 11:01:15
Yannick Gingras <ygi...@yg...> wrote:
> Another trick is that Psyco is *much* more effective with new-style
> classes (derived from object). Since you want to keep Zope support it
> may be a problem, but there is another trick that most people ignore:
> the parent in a class declaration is an expression. So this is valid:
> [snipped]

Yes, thanks for pointing this out. This is documented here:
http://psyco.sourceforge.net/psycoguide/metaclass.html
http://psyco.sourceforge.net/psycoguide/node19.html#l2h-19

The nice thing here is that you do not even have to change your class
declarations, just add 'from psyco.classes import *'. I still have to
study the effect of changing the framework's classes to new-style.
However, a quick try with this enabled on EditingContext, DatabaseContext
and DatabaseChannel does not show any improvement (performance is in fact
slightly worse).

-- Sébastien.
From: Yannick G. <yan...@sa...> - 2003-07-18 13:26:24
On July 18, 2003 07:00 am, Sebastien Bigaret wrote:
> Yes, thanks for pointing this out. This is documented here:
> http://psyco.sourceforge.net/psycoguide/metaclass.html
> http://psyco.sourceforge.net/psycoguide/node19.html#l2h-19
>
> The nice thing here is that you do not even have to change your class
> declarations, just add 'from psyco.classes import *'. I still have to
> study the effect of changing the framework's classes to new-style.
>
> However, a quick try with this enabled on EditingContext,
> DatabaseContext and DatabaseChannel does not show any improvement
> (performance is in fact slightly worse).

My experience with Psyco shows that the code optimization does not always
kick in on the 1st try. Enabling Psyco in profiling mode (which is
unfortunately incompatible with the profiler in the standard library) will
generate code optimizations here and there, even a few minutes after
startup. As usual, I never experienced the kind of performance boost
reported in the documentation: 40% faster max with OBB
(http://openbeatbox.org), but since most of the hard work (raster ops) is
already done in C++ (PyQt), it was expected.

--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468