modeling-users Mailing List for Object-Relational Bridge for Python (Page 32)
Status: Abandoned
Message archive (number of messages per month):

Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
---|---|---|---|---|---|---|---|---|---|---|---|---
2002 |  |  |  |  |  |  |  | 2 | 3 |  |  |
2003 | 19 | 55 | 54 | 48 | 41 | 40 | 156 | 56 | 90 | 14 | 41 | 32
2004 | 6 | 57 | 38 | 23 | 3 | 40 | 39 | 82 | 31 | 14 |  | 9
2005 |  | 4 | 13 |  | 5 | 2 |  | 1 |  |  |  | 1
2006 | 1 | 1 | 9 | 1 |  | 1 | 5 |  | 5 | 1 |  |
2007 |  |  |  |  |  |  |  |  |  | 2 | 1 |
2009 |  |  |  |  |  |  | 4 |  |  |  |  |
2011 |  |  |  |  |  |  | 1 |  |  |  |  |
From: Yannick G. <yan...@sa...> - 2003-07-08 17:42:37
On July 8, 2003 11:51 am, Sebastien Bigaret wrote:
> > > Thanks and congratulation for this instant implementation !
> > >
> > > : )
>
> Okay, I'll add this in the next release then. Need to work on some more
> unittests, but the patch itself for QualifierParser and SQLExpression
> will probably remain as-is.

Argh, I think we have a problem when there is only one elem in the choice list:

  <Fault 1: 'Modeling.Adaptor.GeneralAdaptorException:Couldn\'t evaluate
  expression SELECT t0.id, t0.gl_id, t0.fs2_id FROM FSLINK t0 WHERE (t0.fs2_id
  NOT IN (4,) AND t0.gl_id LIKE 1). Reason:
  _mysql_exceptions.ProgrammingError:(1064, "You have an error in your SQL
  syntax near \') AND t0.gl_id LIKE 1)\' at line 1")'>

I don't think that "NOT IN (4,)" is legal...

> > Of course it should be documented that this kind of fetch is super
> > simple to build with:
> >
> >   "val in %s" % repr(list(vallist))
> >
> > repr() of a list spits out exactly the string you want.
>
> Agreed, but "val in %s" % list(vallist) is a bit shorter, isn't it ;)

You're just too right !

> > BTW there is an option in mailman for the replies to go directly to
> > the list instead of the sender. I think it would be nice to turn it
> > on...
>
> Well, yes, but... I won't go into that discussion here :) I did not
> turn it on because I mainly agree with sourceforge's policy (see
> https://sourceforge.net/docman/display_doc.php?docid=6693&group_id=1 and
> the pointers it contains)

After reading it, I must admit that it makes sense.

--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468
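Note: the trailing comma in "NOT IN (4,)" is exactly what Python produces when a
one-element sequence is rendered through a tuple repr, and most SQL dialects
reject it. A minimal standalone illustration of the pitfall (not the framework's
actual SQLExpression code), with a join-based rendering that stays valid for any
list length:

    vals = [4]

    # Tuple repr keeps the trailing comma for a single element -- invalid SQL:
    print("fs2_id NOT IN %s" % str(tuple(vals)))                    # fs2_id NOT IN (4,)

    # Joining the rendered values works for one element as well as many:
    print("fs2_id NOT IN (%s)" % ", ".join(str(v) for v in vals))   # fs2_id NOT IN (4)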
From: Sebastien B. <sbi...@us...> - 2003-07-08 15:52:13
Yannick Gingras <yan...@sa...> wrote:
> On July 8, 2003 10:11 am, Yannick Gingras wrote:
> > > * The syntax really is '<attr> in/not in [ <v1>, <v2>, ...]', i.e. with
> > >   square brackets, and it will remain as-is --as said in the docs I'm
> > >   not a grammar nor a spark expert, and it would take me too long to
> > >   allow parens '(v1, v2, ...)' here.
> >
> > This patch works like a charm !
> >
> > : )
> >
> > Thanks and congratulation for this instant implementation !
> >
> > : )

Okay, I'll add this in the next release then. Need to work on some more
unittests, but the patch itself for QualifierParser and SQLExpression will
probably remain as-is.

Any comment on lower-cased operators, anyone? The impact is mainly the
following: it increases the number of keywords that you cannot use as
attributes' names in queries. To put it explicitly: you'll get a syntax error
if one of your attributes in the qualifier strings is 'AND', 'OR', 'NOT',
'like', 'ilike', well, you get the picture; by allowing upper/lower case
versions of these operators, the list of reserved keywords is doubled. I do
not think that anyone would use 'and' or 'not' as an attribute's name, but
who knows...

> Of course it should be documented that this kind of fetch is super
> simple to build with:
>
>   "val in %s" % repr(list(vallist))
>
> repr() of a list spits out exactly the string you want.

Agreed, but "val in %s" % list(vallist) is a bit shorter, isn't it ;)

> BTW there is an option in mailman for the replies to go directly to
> the list instead of the sender. I think it would be nice to turn it
> on...

Well, yes, but... I won't go into that discussion here :) I did not turn it
on because I mainly agree with sourceforge's policy (see
https://sourceforge.net/docman/display_doc.php?docid=6693&group_id=1 and the
pointers it contains).

As for me, I've never been bitten by the nasty 'Oops, I did not want to send
that personal/offensive/mean/etc. material to the whole list', but I surely
do not want it to happen! Of course it has already happened in the past that
someone replies to me by mistake rather than to the list; in these cases I
notify the poster of the possible error and ask if the post can be forwarded
to the list --it's not that frequent, and I can surely live with that. And it
also happened that the original post was really directed to me, because for
whatever reason the original poster did not want the message to appear on the
list.

Hoping you won't be too upset by this answer ;)

Regards,

-- Sébastien.
From: Yannick G. <yan...@sa...> - 2003-07-08 14:26:54
On July 8, 2003 10:11 am, Yannick Gingras wrote:
> > * The syntax really is '<attr> in/not in [ <v1>, <v2>, ...]', i.e. with
> >   square brackets, and it will remain as-is --as said in the docs I'm
> >   not a grammar nor a spark expert, and it would take me too long to
> >   allow parens '(v1, v2, ...)' here.
>
> This patch works like a charm !
>
> : )
>
> Thanks and congratulation for this instant implementation !
>
> : )

Of course it should be documented that this kind of fetch is super simple to
build with:

  "val in %s" % repr(list(vallist))

repr() of a list spits out exactly the string you want.

BTW there is an option in mailman for the replies to go directly to the list
instead of the sender. I think it would be nice to turn it on...

--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468
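Both spellings discussed here produce the same qualifier string, since "%s"
applies str() to the list as a whole. A small sketch, assuming a sequence of
values named vallist as in the messages:

    vallist = (24, 82)
    q1 = "age in %s" % repr(list(vallist))   # 'age in [24, 82]'
    q2 = "age in %s" % list(vallist)         # same result, slightly shorter
    assert q1 == q2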
From: Yannick G. <yan...@sa...> - 2003-07-08 14:11:07
On July 8, 2003 12:16 am, you wrote:
> * Patch can be retrieved at:
>
>   https://sourceforge.net/tracker/index.php?func=detail&aid=767625&group_id=58935&atid=489337
>
> * The syntax really is '<attr> in/not in [ <v1>, <v2>, ...]', i.e. with
>   square brackets, and it will remain as-is --as said in the docs I'm
>   not a grammar nor a spark expert, and it would take me too long to
>   allow parens '(v1, v2, ...)' here.

This patch works like a charm !

: )

Thanks and congratulation for this instant implementation !

: )

--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468
From: Sebastien B. <sbi...@us...> - 2003-07-08 10:01:10
More notes on this:

> I've submitted patch #767625 which makes it possible to do:
>   objects=ec.fetch('Writer', 'age in [24, 82]')
> or:
>   objects=ec.fetch('Writer', 'age not in [24, 82]')

* Patch can be retrieved at:

  https://sourceforge.net/tracker/index.php?func=detail&aid=767625&group_id=58935&atid=489337

* The syntax really is '<attr> in/not in [ <v1>, <v2>, ...]', i.e. with
  square brackets, and it will remain as-is --as said in the docs I'm
  not a grammar nor a spark expert, and it would take me too long to
  allow parens '(v1, v2, ...)' here.

-- Sébastien.
From: Sebastien B. <sbi...@us...> - 2003-07-08 09:10:13
Yannick Gingras <yan...@sa...> wrote:
> Hi,
> Is there a way to make a fetch the same way you do with the SQL "in" ?
> ex: "status in [1, 2, 5, 23]"

Not currently...

> It is not that hard to make something like:
>
>   qryLines = []
>   for status in statuses:
>       qryLines.append("( status == %d)" % status)
>   qry = " OR ".join(qryLines)

Right.

> but it looks like a hack to me and is probably really inefficient for
> large lists since spark and the DBMS will have quite an ugly string to
> parse.

Right again ;)

I've submitted patch #767625 which makes it possible to do:

  objects=ec.fetch('Writer', 'age in [24, 82]')

or:

  objects=ec.fetch('Writer', 'age not in [24, 82]')

Please test and report (I've only done partial testing, on integers only);
if it's okay, I'll probably integrate it into the next release.

Note: this patch also relaxes the constraint for operators AND, OR and NOT to
be upper-case: with this patch they can be lower-cased as well (and, or,
not). What do you think, all? Should we adopt this as well? (In this case, I
guess we'll need to do the same for 'like'/'LIKE' and 'ilike'/'ILIKE')

Regards,

-- Sébastien.
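For comparison, the manual workaround quoted above and the syntax added by
patch #767625, side by side. This is only a sketch: it assumes an
EditingContext named ec and an entity with a 'status' attribute, combining the
thread's examples.

    statuses = [1, 2, 5, 23]

    # Before the patch: build an OR-joined qualifier string by hand.
    qry = " OR ".join(["(status == %d)" % s for s in statuses])
    objects = ec.fetch('SomeEntity', qry)

    # With the patch: use the 'in' operator directly (square brackets required).
    objects = ec.fetch('SomeEntity', 'status in %s' % list(statuses))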
From: Yannick G. <yan...@sa...> - 2003-07-07 16:10:24
Hi,
Is there a way to make a fetch the same way you do with the SQL "in" ?
ex: "status in [1, 2, 5, 23]"

It is not that hard to make something like:

  qryLines = []
  for status in statuses:
      qryLines.append("( status == %d)" % status)
  qry = " OR ".join(qryLines)

but it looks like a hack to me and is probably really inefficient for large
lists since spark and the DBMS will have quite an ugly string to parse.

--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468
From: Sebastien B. <sbi...@us...> - 2003-07-04 18:18:09
Hi all,

Release 0.9pre9 has just been released. It introduces the API change we
discussed here. Other changes:

- SQLite was added to the list of supported databases
- Dependency on 4Suite has been removed
- Postgresql adaptor now accepts the TEXT datatype

Please refer to the full changelog (below) for details on fixed bugs.

Along with these changes, the project's home page and the User's Guide have
been updated. I really spent some time on the User's Guide to improve it;
hopefully it now looks better.

Enjoy ;)

-- Sébastien.

------------------------------------------------------------------------
0.9-pre-9 (2003/07/04)
----------------------

* API change (see mailing-list archives):
  Added EditingContext.insert(), delete(), fetch(), fetchCount(),
  autoInsertion(), setAutoInsertion().
  Added CustomObject.globalID().
  Added KeyValueCoding.valuesForKeys().
  Deprecated methods KeyValueCoding.setValueForKey(), setValueForKeyPath(),
  setStoredValueForKey() (will be removed in v0.9.1)

* Added SortOrdering.sortOrderingsWithString()

* documentation: added Project's status page, and contributors. Moved the
  section on KeyValueCoding and RelationshipManipulation to an 'advanced
  techniques' part. Added raw material about fetching and inheritance.
  Updated to reflect the API change.

* Fixed: ModelValidation does not issue an error anymore when a model
  contains a CHAR/VARCHAR field with no width set, as long as the underlying
  adaptor supports it (varchar w/o width is valid for: Postgresql, SQLite),
  as Ernesto Revilla suggested. Instead, it produces a message at the INFO
  level.

* Fixed bug #757181: under python2.2 AccessArrayFaultHandler was triggered
  every time the array is accessed, instead of being fired only once.

* Fixed: adaptorModel() could raise instead of returning None when the
  model's adaptorName is not set

* Fixed: PostgresqlSQLExpression was not correctly escaping '%' (postgresql
  interprets the backslash char just like python does, hence escaping '%'
  requires a double backslash in the SQL query)

* Fixed bug #753147: fetching twice or more with a given FetchSpecification
  did not return the same result set (the original FetchSpec was modified)

* New adaptor layer for SQLite

* Fixed Entity.externalNameForInternalName(): when used with names containing
  figures (such as in 'db2Id'),
  externalNameForInternalName(nameForExternalName()) was not idempotent.
  Applied a patch submitted by Yannick Gingras. Thanks!

* REMOVED dependency on 4Suite (was in 0.9pre8, forgot to announce)

* 'TEXT' field now accepts a width to be set (ignored when generating the
  database schema, but checked, if set, when validating an attribute's value)

* Added 'TEXT' as a valid sql datatype for the Postgresql adaptor

* Added keyword 'ilike', short for 'caseInsensitiveLike', for qualifier
  string definition

* Fixed: the User's Guide as pdf failed to print for acrobat reader v4+,
  because of hyperlinks. Thanks to Ernesto Revilla for reporting and
  identifying the problem.
------------------------------------------------------------------------
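A quick sketch of the convenience methods listed in this changelog, as they are
meant to be used; the import path is an assumption, and the 'Writer' entity and
qualifier strings are taken from examples elsewhere in this thread:

    from Modeling.EditingContext import EditingContext   # import path assumed

    ec = EditingContext()
    # fetch()/fetchCount() are the new facade methods; 'ilike' is the new
    # case-insensitive comparison keyword listed in the changelog.
    writers = ec.fetch('Writer', qualifier='lastName ilike "d*"')
    nb_old = ec.fetchCount('Writer', qualifier='age>=80')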
From: Sebastien B. <sbi...@us...> - 2003-07-02 21:59:03
Hi,

Mario Ruggier <ma...@ru...> wrote:
> On mercredi, juil 2, 2003, at 11:05 Europe/Amsterdam, Jerome Kerdreux wrote:
> > Mario Ruggier wrote:
> >
> > A general comment is about the "new" method being an alias for the "old"
> > method (this may only be a lingo issue). It seems sensible to me to give
> > the "new" or preferred method the code, and other methods that need to be
> > supported for bw compatibility are the aliases (plus an appropriate
> > docstring). This also helps clarify which methods are to be standardized
> > on.
> >
> > Hum .. A general note about aliasing methods: In fact I use a couple of
> > tools for test (SQLObject / MiddleKit / and Modeling), so as MiddleKit
> > and Modeling try to monkey EOF, they use the same method names like
> > insertObject() .. so for me staying closer to EOF is a good thing.
>
> That would be fine. My point was that whatever method name is preferred,
> then this should be the "master" and others that do the same thing are
> aliases. This is irrespective of whether the preferred method name is the
> same as in the EOF or not.

I see your points here, both; I have no definitive answer, my point of view
is a bit different.

- Real aliases, i.e. methods with a different name but with the very same
  functionality as the aliased methods (insert(), delete(),
  setAutoInsertion()): all these have in my mind the very same status. From
  the code point of view, they'll be made by simple assignments:
  'insert=insertObject'; they'll actually *be* the very same method, sharing
  code and docstrings. From a developer's point of view, I guess the
  preference will go here or there; if you're used to SQL, 'insert' will
  probably be natural, while 'insertObject' will come up more easily if you
  already know SQLObject or the EOF. So I won't say there is a preference in
  itself between these, only a personal preference. Even if they appear in
  different circumstances, historical or at users' request, they are, and
  probably will remain, identical.

- Framework enhancements: fetch(), fetchCount(). These are not just aliases:
  you can think of the facade design pattern. They alleviate the developer's
  work by offering a simpler and handier API to fetch objects. Here again, I
  won't say that there will be a preferred one in the absolute. fetch() is a
  real enhancement and I'm glad it's showing up, because having to import
  modules, then instantiate a Qualifier and a FetchSpecification just to
  fetch some objects is just more painful than simply calling fetch() with
  the needed parameters. However, the "old way" remains meaningful --and
  different, because it offers plain control over the different objects
  collaborating within the framework at some point (here, when fetching). It
  will eventually be moved into an 'advanced techniques' section, of course,
  and handier methods will be introduced first; after all, the framework is
  supposed to make it easy to access objects stored in a db!

Another, slightly different, question Mario was probably referring to is:
which methods do you need to know at first sight, and which can you safely
ignore if you do not want to fine-tune the framework, or if you don't have
time to dive too deeply into it? This is an important issue I'm aware of. I
already said here that I'm looking for a solution to distinguish the one from
the others in the documentation. I haven't spent time on it yet, but this
*will* be addressed in the future.

Sorry, I leave the other interesting points pending --mostly because they are
out of the scope of the actual API changes, and because I need some rest now
;) But we'll get back to these for sure, and maybe Soif could elaborate a bit
further on his CustomSearch example? It would be interesting to see how this
compares to the other approaches we already discussed here.

Cheers,

-- Sébastien.
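The facade described above, side by side with the "old way" it wraps. A sketch
only: it assumes an EditingContext named ec, and the import paths are
assumptions; the call names are the ones used elsewhere in this thread.

    from Modeling.FetchSpecification import FetchSpecification
    from Modeling.Qualifier import qualifierWithQualifierFormat

    # "Old way": explicit control over the collaborating objects.
    qualifier = qualifierWithQualifierFormat('lastName=="Hugo"')
    fetchSpec = FetchSpecification(entityName='Writer', qualifier=qualifier)
    writers = ec.objectsWithFetchSpecification(fetchSpec)

    # Facade: the same fetch, with nothing extra to import or instantiate.
    writers = ec.fetch('Writer', qualifier='lastName=="Hugo"')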
From: Mario R. <ma...@ru...> - 2003-07-02 09:35:03
On mercredi, juil 2, 2003, at 11:05 Europe/Amsterdam, Jerome Kerdreux wrote:
> Mario Ruggier wrote:
>
> Hum .. A general note about aliasing methods: In fact I use a couple of
> tools for test (SQLObject / MiddleKit / and Modeling), so as MiddleKit and
> Modeling try to monkey EOF, they use the same method names like
> insertObject() .. so for me staying closer to EOF is a good thing.

That would be fine. My point was that whatever method name is preferred,
then this should be the "master" and others that do the same thing are
aliases. This is irrespective of whether the preferred method name is the
same as in the EOF or not.

> > > - make fetchCount() an alias for objectsCountWithFetchSpecification()
> > >   [same API as fetch()]
> >
> > Hmmn, I would not introduce such a specialized method (will become
> > clutter even in the short term). As suggested earlier, it may be more
> > flexible to just have a fetchSql() open method for now (that is not
> > necessarily db-independent) and that returns whatever the dbapi
> > returns. For me nothing is gained by having fetchCount('customobjname')
> > over fetchSql('select count(*) from tblname') -- it is in fact less
> > flexible.
> > When the time comes to generalize this to be db-independent, either
> > the fetch() method will gain new parameters to allow this, or other
> > generic methods introduced, such as fetchRaw, fetchTuples, fetchDicts,
> > or whatever. fetchSql can always have a use, though, so it is not likely
> > to become clutter.
>
> I guess you miss something Mario. Let's take an example:
>
>   qualifier=qualifierWithQualifierFormat('lastName=="Hugo"')
>   fetchSpec=FetchSpecification(entityName='Writer', qualifier=qualifier)
>
> fetchCount(fetchSpec) will return the number of writers with lastName ==
> 'Hugo', and so on; this is not as trivial as a simple 'select count(*) ..'.
>
> So the answer is no: fetchCount() and fetchSql() don't have the same
> purpose, even if you can use fetchSql() to do a fetchCount() or anything
> else.

Fair enough. fetchSql is an open-ended convenience hack, that allows more
time to think over what a more generalized API for fetch() could be -- so
that it would handle in an OO way queries such as qualified counts, sums,
etc. I suspect that defining a fetchCount will lead to other unnecessary
definitions of other "dedicated" methods (that could all be addressed with a
well-thought-out, and simpler, fetch+ interface).

> And I really like the OO approach of Modeling, and I really think that
> even fetchSQL() (which for me should be fetchSQLObject()) should always
> return an object.

Yes, agreed. To achieve this, there needs to be a way to allow "generic
custom objects" to be returned by fetches, i.e. objects that do not
necessarily correspond to a class defined in a model (such as for count,
sum, ...), or, as you nicely describe below, an object that 'is not a real
object but only a "result object"'.

> Take a look at this:
> - in the model you describe the object mapping (which is not a real object
>   but only a "result object"). For example:
>     CustomSearch
>       - searchString()
>       - books relation one -> many
>       - ...
> - I think we should find a way to do
>     ec.objectsWithFetchSpecification('CustomSearch', ..searchString='Toto')
>   and so provide a method to do some special SQL in CustomSearch() which
>   returns CustomSearch objects.
>
> But this way you can do something like:
>   search = CustomSearch(...)
>   books = search.getBooks()   # returns Book objects .. not raw SQL

Absolutely.

It would be wonderful to have the fetch API also be able to handle this.
The question is how to generalize the API to achieve all these features
(plus be easily extensible) while keeping it as simple as possible.

mario
From: Jerome K. <Jer...@fi...> - 2003-07-02 09:22:39
Mario Ruggier wrote:

Hum .. A general note about aliasing methods: In fact I use a couple of tools
for test (SQLObject / MiddleKit / and Modeling), so as MiddleKit and Modeling
try to monkey EOF, they use the same method names like insertObject() .. so
for me staying closer to EOF is a good thing.

> > - make fetchCount() an alias for objectsCountWithFetchSpecification()
> >   [same API as fetch()]
>
> Hmmn, I would not introduce such a specialized method (will become
> clutter even in the short term). As suggested earlier, it may be more
> flexible to just have a fetchSql() open method for now (that is not
> necessarily db-independent) and that returns whatever the dbapi
> returns. For me nothing is gained by having fetchCount('customobjname')
> over fetchSql('select count(*) from tblname') -- it is in fact less
> flexible.
> When the time comes to generalize this to be db-independent, either
> the fetch() method will gain new parameters to allow this, or other
> generic methods introduced, such as fetchRaw, fetchTuples, fetchDicts,
> or whatever. fetchSql can always have a use, though, so it is not likely
> to become clutter.

I guess you miss something Mario. Let's take an example:

  qualifier=qualifierWithQualifierFormat('lastName=="Hugo"')
  fetchSpec=FetchSpecification(entityName='Writer', qualifier=qualifier)

fetchCount(fetchSpec) will return the number of writers with lastName ==
'Hugo', and so on; this is not as trivial as a simple 'select count(*) ..'.

So the answer is no: fetchCount() and fetchSql() don't have the same purpose,
even if you can use fetchSql() to do a fetchCount() or anything else.

And I really like the OO approach of Modeling, and I really think that even
fetchSQL() (which for me should be fetchSQLObject()) should always return an
object.

Take a look at this:

- in the model you describe the object mapping (which is not a real object
  but only a "result object"). For example:
    CustomSearch
      - searchString()
      - books relation one -> many
      - ...
- I think we should find a way to do
    ec.objectsWithFetchSpecification('CustomSearch', ..searchString='Toto')
  and so provide a method to do some special SQL in CustomSearch() which
  returns CustomSearch objects.

But this way you can do something like:

  search = CustomSearch(...)
  books = search.getBooks()   # returns Book objects .. not raw SQL

> > Future enhancements
> > -------------------
> >
> > Interesting future features (NOT included in this proposal) have been
> > proposed in this thread, including:
> >
> > - fetchSQL(): the possibility to pass a raw sql query to build
> >   objects,

To build objects or raw SQL? Perhaps we can take a look at how SQLObject does
that?

> > - the ability to fetch raw rows (i.e. raw dict) rather than
> >   fully-initialized objects,

yeah ..

Bye Bye ..
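Jerome's point, condensed: fetchCount() counts at the model level (entities and
qualifiers, with joins generated for you), while the proposed fetchSql() would
drop down to table level and be db-dependent. A hypothetical comparison only --
fetchSql() was just a proposal at this point in the thread, and ec is an
EditingContext as in the examples above:

    # Model-level: same qualifier language as fetch(), no table names.
    nb = ec.fetchCount('Writer', qualifier='lastName=="Hugo"')

    # Raw, db-dependent equivalent of only the simplest case:
    # rows = ec.fetchSql("SELECT COUNT(*) FROM WRITER WHERE LAST_NAME='Hugo'")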
From: Mario R. <ma...@ru...> - 2003-07-02 08:06:12
Hi, sorry about this reply being so late...

I agree with most of what is summarized below, but have a few comments.

A general comment is about the "new" method being an alias for the "old"
method (this may only be a lingo issue). It seems sensible to me to give the
"new" or preferred method the code, and other methods that need to be
supported for bw compatibility are the aliases (plus an appropriate
docstring). This also helps clarify which methods are to be standardized on.

On mardi, juin 24, 2003, at 11:43 Europe/Amsterdam, Sebastien Bigaret wrote:
>
> Hi,
>
> Time to summarize the changes we discussed here:
>
> EditingContext
> --------------
>
> - insert() is an alias for insertObject()
>
> - delete() is an alias for deleteObject()
>
> - fetching: remove the need to import FetchSpecification and
>   Qualifier, and propose an alternate for
>   objectsWithFetchSpecification()
>
>     def fetch(self, entityName,
>               qualifier=None, # either a Qualifier instance or a string
>               isDeep=0,       # should subentities be fetched as well?
>               )
>
>   Note that parameters orderBy, limit/page/offset, lock and rawRows
>   have been removed since they are unsupported yet (they will be
>   introduced when support is available).

OK.

> - make fetchCount() an alias for objectsCountWithFetchSpecification()
>   [same API as fetch()]

Hmmn, I would not introduce such a specialized method (will become clutter
even in the short term). As suggested earlier, it may be more flexible to
just have a fetchSql() open method for now (that is not necessarily
db-independent) and that returns whatever the dbapi returns. For me nothing
is gained by having fetchCount('customobjname') over fetchSql('select
count(*) from tblname') -- it is in fact less flexible.
When the time comes to generalize this to be db-independent, either the
fetch() method will gain new parameters to allow this, or other generic
methods introduced, such as fetchRaw, fetchTuples, fetchDicts, or whatever.
fetchSql can always have a use, though, so it is not likely to become
clutter.

> - make (set)autoInsertion() aliases for
>   (set)propagatesInsertionForRelatedObjects()

OK

> CustomObject
> ------------
>
> - add globalID()

Yes. The scope of "global" here is the EditingContext (and children), or all
EditingContexts in this python session, or the current model, the db, ...?

> KeyValueCoding
> --------------
>
> - deprecate methods setValueForKey(), setValueForKeyPath() and
>   setStoredValueForKey()
>
>   I propose to set the removal time for these deprecated methods at
>   version 0.9.1

OK.

> - add valuesForKeys(), counterpart for takeValuesFromDictionary()
>
> Additionally, I propose to move the chapter dealing with KVC to another
> part in the User's Guide, in some 'advanced techniques' chapter, so that
> it is not exposed as it is today.

OK, we can discuss this more (where to move this to in the guide).

> Validation
> ----------
>
> No changes (because of backward compatibility issues, mainly)
>
>
> About the ``new'' KVC module:
> -----------------------------
>
> Do we agree that it can be separately defined so that users can use it as
> a mix-in for their classes, at their will? If so, does anyone want to take
> the lead for this? (meaning at least continuing the discussion and coming
> to a decision)

I was under the impression that you said it will **not** be a mix-in (as per:
http://sourceforge.net/mailarchive/message.php?msg_id=5250043 ). The
interface described in the archive message above is OK for me.

However, if it is to be a mix-in, discussion on this can continue
independently of these changes? In any case, I will be happy to continue to
be a part of this discussion.

> Future enhancements
> -------------------
>
> Interesting future features (NOT included in this proposal) have been
> proposed in this thread, including:
>
> - fetchSQL(): the possibility to pass a raw sql query to build
>   objects,
>
> - the ability to fetch raw rows (i.e. raw dict) rather than
>   fully-initialized objects,
>
> - fetchSummary(): adds access to sum()/avg()/... and group by/having
>   to the fetching API
>
> We'll probably need to discuss some of these points a bit more,
> however if you think you'll need one or more of these for the coming
> releases please fill in a RFE on sourceforge's page and announce it
> here --this is definitely the best way to make it happen sooner.

OK, except for fetchSQL. I think this should be not so difficult to add as a
generic method, and will take off the pressure on other premature additions.

> I'd like to validate the API change proposal for the end of the week, if
> possible, so I'd appreciate if you could comment on this in the coming
> days. And BTW thanks a lot for the attention you already paid to this
> topic.
>
> -- Sébastien.

Also, what about the renaming of addObjectToBothSidesOfRelationshipWithKey()
and removeObjectFromBothSidesOfRelationshipWithKey(), respectively, with
addViaRelation() and removeViaRelation()?

And, yes, I think RelationshipManipulation should be combined with KVC (or
whatever this is renamed to -- how about an "Access" module within a
CustomObject sub-package?)

mario
From: Sebastien B. <sbi...@us...> - 2003-07-01 19:21:21
Hi all,

Got no answer for this; I guess this has already been discussed enough. Since
the proposed change is a subset of the initial one that everybody agreed on,
I'll apply it on Thursday evening (i.e. in two days) --unless somebody
comments further on this. I'll then make a new release.

-- Sébastien.

Sebastien Bigaret <sbi...@us...> writes:
> Hi,
>
> Time to summarize the changes we discussed here:
>
> EditingContext
> --------------
>
> - insert() is an alias for insertObject()
>
> - delete() is an alias for deleteObject()
>
> - fetching: remove the need to import FetchSpecification and
>   Qualifier, and propose an alternate for
>   objectsWithFetchSpecification()
>
>     def fetch(self, entityName,
>               qualifier=None, # either a Qualifier instance or a string
>               isDeep=0,       # should subentities be fetched as well?
>               )
>
>   Note that parameters orderBy, limit/page/offset, lock and rawRows
>   have been removed since they are unsupported yet (they will be
>   introduced when support is available).
>
> - make fetchCount() an alias for objectsCountWithFetchSpecification()
>   [same API as fetch()]
>
> - make (set)autoInsertion() aliases for
>   (set)propagatesInsertionForRelatedObjects()
>
> CustomObject
> ------------
>
> - add globalID()
>
> KeyValueCoding
> --------------
>
> - deprecate methods setValueForKey(), setValueForKeyPath() and
>   setStoredValueForKey()
>
>   I propose to set the removal time for these deprecated methods at
>   version 0.9.1
>
> - add valuesForKeys(), counterpart for takeValuesFromDictionary()
>
> Additionally, I propose to move the chapter dealing with KVC to another
> part in the User's Guide, in some 'advanced techniques' chapter, so that
> it is not exposed as it is today.
>
> Validation
> ----------
>
> No changes (because of backward compatibility issues, mainly)
>
>
> About the ``new'' KVC module:
> -----------------------------
>
> Do we agree that it can be separately defined so that users can use it as
> a mix-in for their classes, at their will? If so, does anyone want to take
> the lead for this? (meaning at least continuing the discussion and coming
> to a decision)
>
>
> Future enhancements
> -------------------
>
> Interesting future features (NOT included in this proposal) have been
> proposed in this thread, including:
>
> - fetchSQL(): the possibility to pass a raw sql query to build
>   objects,
>
> - the ability to fetch raw rows (i.e. raw dict) rather than
>   fully-initialized objects,
>
> - fetchSummary(): adds access to sum()/avg()/... and group by/having
>   to the fetching API
>
> We'll probably need to discuss some of these points a bit more,
> however if you think you'll need one or more of these for the coming
> releases please fill in a RFE on sourceforge's page and announce it
> here --this is definitely the best way to make it happen sooner.
>
>
> I'd like to validate the API change proposal for the end of the week, if
> possible, so I'd appreciate if you could comment on this in the coming
> days. And BTW thanks a lot for the attention you already paid to this
> topic.
>
> -- Sébastien.
From: Sebastien B. <sbi...@us...> - 2003-06-24 09:58:23
Just a quick note to inform you that the psycopg problem and the fix have
been validated by its maintainer, and it has been integrated into the recent
release. You'll probably want to upgrade your local copy of psycopg to
version 1.1.5.1.

-- Sébastien.

Sebastien Bigaret <sbi...@us...> writes:
> Remember I wrote on Sat Feb 22 2003:
> > [...]
> > Important (I almost forgot, silly me): if you're using the framework in
> > ---------  a long-standing process (being alive for more than 24 hours),
> >            I strongly suggest that you upgrade your copy and that you do
> >            NOT set MDL_PERMANENT_DB_CONNECTION (see changelog below); I
> >            have experienced strange behaviour with long-standing
> >            processes (w/ python2.1 & Zope 2.6, psycopg v1.0.14 and
> >            1.0.15, and PG server 7.2), such as committed changes being
> >            correctly issued by the psycopg adaptor but never committed
> >            within the postgresql server, hence causing the corresponding
> >            transaction to be *lost* (when this happens, you see a
> >            postgresql 'idle transaction in progress', never ending until
> >            the python process itself dies)
> >
> > While the exact reason for this problem is not completely clear at the
> > moment, it was solved by the new behaviour (which closes the connection
> > to the db as soon as no more adaptor channels are opened).
>
> Current status: even with psycopg 1.1.4 and MDL_PERMANENT_DB_CONNECTION
> unset, the problem still appears on one project using modeling and zope
> (despite what I wrote at that time, saying it could be resolved).
>
> This *only* affects modeling/psycopg in a multi-threaded environment
> (such as with zope).
>
> Other users also reported the problem, for example:
>   http://lists.initd.org/pipermail/psycopg/2002-August/001308.html
>   http://lists.initd.org/pipermail/psycopg/2003-June/002079.html
> My initial report is at:
>   http://lists.initd.org/pipermail/psycopg/2003-March/thread.html#1885
>
> A few days ago I worked again on this and I may have found the reason for
> this, along with a test prg. reproducing the pb. and a possible
> correction; I'm still waiting for comments from the psycopg mailing-list.
> Should it be of some interest to you, you'll find my analysis and
> proposals at:
>   http://lists.initd.org/pipermail/psycopg/2003-June/002086.html
>
> In brief: until another solution is validated by the psycopg guys, I just
> removed the Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS statements in
> cursor.c:_psyco_curs_execute() and the problem seems to disappear (note
> that by doing this, if you have some sql queries taking a long time to
> execute, all the other threads will be *blocked* until the query
> finishes).
>
> Please keep in mind that for the moment being this is /my/ analysis and
> definitely not an authoritative answer to the bug.
>
> I'll report here when additional infos are available. I thought you'd
> appreciate to be kept informed.
>
> -- Sébastien.
From: Sebastien B. <sbi...@us...> - 2003-06-24 09:44:36
Hi,

Time to summarize the changes we discussed here:

EditingContext
--------------

- insert() is an alias for insertObject()

- delete() is an alias for deleteObject()

- fetching: remove the need to import FetchSpecification and
  Qualifier, and propose an alternate for
  objectsWithFetchSpecification()

    def fetch(self, entityName,
              qualifier=None, # either a Qualifier instance or a string
              isDeep=0,       # should subentities be fetched as well?
              )

  Note that parameters orderBy, limit/page/offset, lock and rawRows
  have been removed since they are unsupported yet (they will be
  introduced when support is available).

- make fetchCount() an alias for objectsCountWithFetchSpecification()
  [same API as fetch()]

- make (set)autoInsertion() aliases for
  (set)propagatesInsertionForRelatedObjects()

CustomObject
------------

- add globalID()

KeyValueCoding
--------------

- deprecate methods setValueForKey(), setValueForKeyPath() and
  setStoredValueForKey()

  I propose to set the removal time for these deprecated methods at
  version 0.9.1

- add valuesForKeys(), counterpart for takeValuesFromDictionary()

Additionally, I propose to move the chapter dealing with KVC to another part
in the User's Guide, in some 'advanced techniques' chapter, so that it is not
exposed as it is today.

Validation
----------

No changes (because of backward compatibility issues, mainly)


About the ``new'' KVC module:
-----------------------------

Do we agree that it can be separately defined so that users can use it as a
mix-in for their classes, at their will? If so, does anyone want to take the
lead for this? (meaning at least continuing the discussion and coming to a
decision)


Future enhancements
-------------------

Interesting future features (NOT included in this proposal) have been
proposed in this thread, including:

- fetchSQL(): the possibility to pass a raw sql query to build
  objects,

- the ability to fetch raw rows (i.e. raw dict) rather than
  fully-initialized objects,

- fetchSummary(): adds access to sum()/avg()/... and group by/having
  to the fetching API

We'll probably need to discuss some of these points a bit more, however if
you think you'll need one or more of these for the coming releases please
fill in a RFE on sourceforge's page and announce it here --this is definitely
the best way to make it happen sooner.


I'd like to validate the API change proposal for the end of the week, if
possible, so I'd appreciate if you could comment on this in the coming days.
And BTW thanks a lot for the attention you already paid to this topic.

-- Sébastien.
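One way the "deprecate but keep as alias" scheme in this summary can be
implemented is to have the old spelling warn and delegate. A simplified sketch
only: it assumes takeValueForKey() is the EOF-style spelling being kept, and it
is not the framework's actual code.

    import warnings

    class KeyValueCodingSketch:
        def takeValueForKey(self, value, key):
            setattr(self, key, value)          # grossly simplified stand-in

        def setValueForKey(self, value, key):  # deprecated spelling
            warnings.warn("setValueForKey() is deprecated; "
                          "use takeValueForKey() instead",
                          DeprecationWarning, stacklevel=2)
            return self.takeValueForKey(value, key)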
From: Sebastien B. <sbi...@us...> - 2003-06-24 09:01:08
Hi,

This has been changed in CVS and will be in the next release: a CHAR/VARCHAR
w/o width no longer produces a message at the ERROR level, but at the INFO
level, when the underlying db supports it (i.e. Postgresql, SQLite).

-- Sébastien.

Sebastien Bigaret <sbi...@us...> wrote:
> Ernesto Revilla <er...@si...> wrote:
> > Going thru the PostgreSQL doc version 7.3.2 it says:
> > * varchar w/o upper limit is not standard SQL
> > * text is postgres specific
> > * varchar w/o limit and text have no performance penalties compared with
> >   varchar with limit
> > * a lot of modern dbms systems support unlimited text.
> >
> > What about mySQL? I would like to see if the width requirement of
> > varchar could be loosened (perhaps a warning, or ignored). I don't know
> > how other database systems treat this.
>
> MySQL: http://www.mysql.com/doc/en/CHAR.html, no CHAR/VARCHAR w/o width
> SQLite: accepts varchar without limit
>
> ModelValidation issues an error when a varchar column is declared without
> a width; however, that does not prevent you from running
> mdl_generate_DB_schema: if the db underneath supports it, everything will
> be fine :)
>
> I agree, however, that ModelValidation should reflect the db specificities
> when possible. This will be changed in the near future so that it issues a
> message at the 'info' level (rather than 'error') for rdbms supporting it.
>
> See also the note I recently added to the User's Guide about the 'width'
> parameter (will be online for the next release), depicting the role played
> by 'width' at runtime:
>
> << Note: The field width is used at runtime when validating a string
>    attribute's value (see 3.4, ``Validation''), before objects are saved
>    in the database. SQL data types CHAR and VARCHAR require that width
>    is specified; on the other hand, TEXT --when supported by the
>    database-- does not accept that width is set. If you set it on an
>    attribute, it will be ignored when the database schema is generated
>    (i.e. a TEXT field my_text with width=30 won't be declared as
>    TEXT(30), but simply as TEXT), but it will be checked at runtime and
>    validation for this attribute will fail if its value's length exceeds
>    the given width. Note that the same goal can be achieved by writing a
>    specific validation method for the attribute (see 3.4.2). >>
>
> -- Sébastien.
From: Sebastien B. <sbi...@us...> - 2003-06-19 18:55:27
Yannick Gingras <yan...@sa...> wrote:
> On June 19, 2003 07:33 am, you wrote:
> > Thanks for reporting. I still need to check why this was not detected by
> > the unittests, then I'll put this in CVS and integrate the fix in the
> > next release.
>
> Thanks for the super fast answer !

No problem! FYI the fix has been checked in on the main trunk this afternoon
(BTW one of the unittests was supposed to check that, but the test itself was
buggy, cf. test_EC_Global.test_03_toManyFaultTrigger).

I'll be off for a couple of days; I'll comment on the other posts about the
API changes and related points when I'm back --monday or tuesday. Hopefully
we'll get that stuff done for the end of next week.

Regards,

-- Sébastien.
From: Yannick G. <yan...@sa...> - 2003-06-19 13:34:17
On June 19, 2003 07:33 am, you wrote:
> Thanks for reporting. I still need to check why this was not detected by
> the unittests, then I'll put this in CVS and integrate the fix in the
> next release.

Thanks for the super fast answer !

--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468
From: Sebastien B. <sbi...@us...> - 2003-06-19 12:33:49
Okay, found it, apply this patch:

------------------------------------------------------------------------
diff -u -r1.13 FaultHandler.py
--- FaultHandler.py   14 Mar 2003 11:40:08 -0000   1.13
+++ FaultHandler.py   19 Jun 2003 12:20:39 -0000
@@ -452,7 +452,8 @@
   if sys.version_info >= (2, 2): # python v2.2 and higher **UNTESTED**
     def __getattr__(self, name):
       if name in self.__sequenceTypeMethods:
-        self.completeInitializationOfObject(self.__object())
+        if not self.__list:
+          self.completeInitializationOfObject(self.__object())
         #trace(self.__object().storedValueForKey(self._relationshipName))
         return getattr(self.__object().storedValueForKey(self._relationshipName), name)
------------------------------------------------------------------------

Thanks for reporting. I still need to check why this was not detected by the
unittests, then I'll put this in CVS and integrate the fix in the next
release.

Detailed explanation: __iter__ did trigger the fault as expected, but
__getitem__ then triggered it again, without noticing that the fault had
already been fired.

-- Sébastien.

> Sebastien Bigaret <sbi...@us...> wrote:
>
> > Yannick Gingras <yan...@sa...> wrote:
> > > Hi,
> > > As I understand (looking at my bandwidth monitor), to-many relations
> > > are lazily retrieved.
> > >
> > > ex:
> > >   myBooks = meAsAnAuthor.getBooks() # does not fetch anything
> > >   for book in myBooks:              # fetch a record from the DB each loop
> > >       print book.getTitle()
> > >
> > > Is there a way to fetch the complete meAsAnAuthor with all its books at
> > > once, to avoid the connection latency of a fetch for each book ?
> >
> > Could you be more specific? For example, by setting
> > MDL_ENABLE_DATABASE_LOGGING and reporting the fetch you see for each
> > loop? The framework normally fetches all the books when the array is
> > first accessed, and if you find an exception to this rule this is
> > definitely a bug. Here, accessing the array is first done when
> > iterating on it in the for statement.
> [...]
>
> Oh well, you're right, the framework misbehaves with py2.2 (my initial
> check was against py2.1).
>
> I have submitted bug item #757181 and will have a look at it.
>
> -- Sébastien.
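The idea behind the one-line guard in the patch, reduced to a standalone
sketch: this is not the framework's FaultHandler, just the "resolve the fault
once, then reuse the fetched list" behaviour it restores under python 2.2.

    class ToManyFaultSketch:
        def __init__(self, load):
            self._load = load      # callable performing the real DB fetch
            self._list = None      # not resolved yet

        def _resolve(self):
            if self._list is None:         # the guard: fire the fault only once
                self._list = self._load()
            return self._list

        def __iter__(self):                # first access triggers the fetch...
            return iter(self._resolve())

        def __getitem__(self, i):          # ...later accesses reuse the result
            return self._resolve()[i]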
From: Sebastien B. <sbi...@us...> - 2003-06-19 12:17:42
Sebastien Bigaret <sbi...@us...> wrote:
> Yannick Gingras <yan...@sa...> wrote:
> > Hi,
> > As I understand (looking at my bandwidth monitor), to-many relations
> > are lazily retrieved.
> >
> > ex:
> >   myBooks = meAsAnAuthor.getBooks() # does not fetch anything
> >   for book in myBooks:              # fetch a record from the DB each loop
> >       print book.getTitle()
> >
> > Is there a way to fetch the complete meAsAnAuthor with all its books at
> > once, to avoid the connection latency of a fetch for each book ?
>
> Could you be more specific? For example, by setting
> MDL_ENABLE_DATABASE_LOGGING and reporting the fetch you see for each loop?
> The framework normally fetches all the books when the array is first
> accessed, and if you find an exception to this rule this is definitely a
> bug. Here, accessing the array is first done when iterating on it in the
> for statement.
[...]

Oh well, you're right, the framework misbehaves with py2.2 (my initial check
was against py2.1).

I have submitted bug item #757181 and will have a look at it.

-- Sébastien.
From: Sebastien B. <sbi...@us...> - 2003-06-19 11:57:00
Yannick Gingras <yan...@sa...> wrote:
> Hi,
> As I understand (looking at my bandwidth monitor), to-many relations
> are lazily retrieved.
>
> ex:
>   myBooks = meAsAnAuthor.getBooks() # does not fetch anything
>   for book in myBooks:              # fetch a record from the DB each loop
>       print book.getTitle()
>
> Is there a way to fetch the complete meAsAnAuthor with all its books at
> once, to avoid the connection latency of a fetch for each book ?

Could you be more specific? For example, by setting
MDL_ENABLE_DATABASE_LOGGING and reporting the fetch you see for each loop?
The framework normally fetches all the books when the array is first
accessed, and if you find an exception to this rule this is definitely a bug.
Here, accessing the array is first done when iterating on it in the for
statement.

This is what I expect (and actually get here) for this code:

  ec=EditingContext()
  qualifier=qualifierWithQualifierFormat('lastName=="Dard"')
  fetchSpec=FetchSpecification('Writer', qualifier)
  dard=ec.objectsWithFetchSpecification(fetchSpec)[0]
  idx=1
  myBooks=dard.getBooks()
  print '### before loop'
  for book in myBooks:
      print '### loop: %i, title: %s'%(idx,book.getTitle())
      idx+=1

I get (stripping unnecessary log msgs):

  Evaluating: SELECT t0.ID, t0.FIRST_NAME, t0.LAST_NAME, t0.AGE, t0.BIRTHDAY, t0.FK_WRITER_ID FROM WRITER t0 WHERE t0.LAST_NAME = 'Dard'
  rowcount: 1
  Returning: {'id': 3, 'age': 82, 'lastName': 'Dard', 'FK_Writer_id': 2, 'firstName': 'Frederic', 'birthday': <DateTime object for '1921-06-29 04:56:34.00' at 81699f0>}
  ### before loop
  Evaluating: SELECT t0.id, t0.title, t0.FK_WRITER_ID, t0.PRICE FROM BOOK t0 WHERE (t0.FK_WRITER_ID = 3)
  rowcount: 3
  Returning: {'title': 'Bouge ton pied que je voie la mer', 'id': 2, 'price': None, 'FK_Writer_Id': 3}
  rowcount: 3
  Returning: {'title': 'Le coup du pere Francois', 'id': 3, 'price': None, 'FK_Writer_Id': 3}
  rowcount: 3
  Returning: {'title': "T'assieds pas sur le compte-gouttes", 'id': 4, 'price': None, 'FK_Writer_Id': 3}
  rowcount: 3
  Returning: None
  ### loop: 1, title: Bouge ton pied que je voie la mer
  ### loop: 2, title: Le coup du pere Francois
  ### loop: 3, title: T'assieds pas sur le compte-gouttes

-- Sébastien.
From: Yannick G. <yan...@sa...> - 2003-06-18 21:04:22
Hi,
As I understand (looking at my bandwidth monitor), to-many relations are
lazily retrieved.

ex:

  myBooks = meAsAnAuthor.getBooks() # does not fetch anything
  for book in myBooks:              # fetch a record from the DB each loop
      print book.getTitle()

Is there a way to fetch the complete meAsAnAuthor with all its books at once,
to avoid the connection latency of a fetch for each book ?

--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
(514) 276-5468
From: Mario R. <ma...@ru...> - 2003-06-17 18:32:10
On jeudi, juin 12, 2003, at 16:34 Europe/Amsterdam, Sebastien Bigaret wrote:
>
> Hi all,
>
> Some more comments about the API change proposal; I concentrate here on
> EditingContext, and will post another message for CustomObject and
> related APIs.
>
> Note: I also received two private answers.
>
> * Again: methods will be aliased, not replaced,
>
> * EC.insert() and EC.delete() are okay to everyone.
>
> * autoInsertion()/setAutoInsertion(): the former is a getter, the
>   latter, a setter
>
>   Mario> OK. What about commit() for saveChanges()?
>
> * about commit(): that was initially in my proposal. I removed it.
>   Remember I'm working on integrating the ZODB transaction and
>   observation mechanisms into the modeling? Well, then, commit() clashes
>   with py2.1 zodb's transaction protocol (and has a very different
>   semantic than saveChanges(): it is equivalent to objectWillChange()).
>   Hence I'd prefer to leave it out of the proposal for the moment being.

OK.

> * EC.fetch():
>
>   The initial proposal is accepted by everyone so we'll keep it as a basis.
>
>   Mario> Another issue is that even with all the keyword options offered,
>   Mario> one still has to "dip into" small pieces of raw sql very quickly,
>   Mario> as shown by your examples.
>
>   Here I strongly disagree with you. It /seems/ that you dip into small
>   pieces of raw sql, but it's not the case at all. Qualifiers and
>   FetchSpecification properties offer a generic command language;
>   consider this:
>
>     ec.fetch('Book', 'author.pygmalion.lastName ilike "r*"')
>
>   -> this is far from the generated sql:
>
>     SELECT t0.id, t0.title, t0.FK_WRITER_ID, t0.PRICE
>     FROM BOOK t0
>          INNER JOIN ( WRITER t1
>                       INNER JOIN WRITER t2
>                       ON t1.FK_WRITER_ID=t2.ID )
>          ON t0.FK_WRITER_ID=t1.ID
>     WHERE UPPER(t2.LAST_NAME) LIKE UPPER('r%');
>
>   In the framework we generally make the strong assumption that raw sql is
>   taken out of the python code. Even if some of the keywords (such as
>   'like'/'ilike' for qualifiers, or 'asc'/'desc' for ordering) are
>   actually the same as the corresponding sql keywords, they are indeed
>   decorrelated. See again the example above: the 'ilike' keyword is not
>   used, rather we compare UPPER()s (this might change in the future, now
>   that 'ilike' is a sql keyword accepted by a lot of db-servers ;)

OK, sorry, I am not appreciating enough the high-levelness of the query
api... Your example certainly is a nice generic separation, and
simplification. However, by "bits" of sql I intended really little bits, such
as the age param and the orderBy param, below:

  ec.fetch('Writer', 'age>=80')
  ec.fetch('Writer', orderBy='firstName asc, lastName desc', isDeep=1)

>   Mario> This may be an acceptable compromise, but then client code should
>   Mario> be allowed to pass any SQL it wants...
>
>   That's another problem, to which I completely agree: we should have a
>   means to execute complete raw sql statements, and get the (raw) result
>   back. Moreover, we should be able to transform the returned raw rows
>   into real objects if necessary (and if possible).

Very good. For "casting" a raw row to an object, probably a simple generic
utility function could be enough. The only problem is the order of the tuple,
and the mapping onto the attributes.

> > Another detail is that the object loads all (direct) properties, every
> > time -- there may be no workaround for this, given the way the
> > framework is built.
>
>   (also related to your 'resultset' proposal)
>
>   Last, we should also be able to tell the framework to return the raw
>   rows instead of fully initialized objects (which can later be converted
>   to real objects): there's a real need for that; sometimes you want to
>   present a (very) long list of objects in a summary page/widget, but you
>   do not need the full objects, not even every attribute, just a subset to
>   present to the user. Then the user selects one or more of these rows and
>   that's where you'll transform the raw rows into real objects.
>
>   Note that the framework architecture will have no problem supporting
>   this. Some of the needed APIs are already present but not implemented
>   (such as DatabaseContext.faultForRawRow()).
>
>   I thought this was on the todo list but it's not --I'll add that, since
>   I've been thinking about these points for quite a long time.

OK

>   Impact on the API:
>
>   when this is implemented I suggest we add the following parameters to
>   fetch():
>
>     rawRows -- (default: false) return raw rows instead of objects
>
>     sql -- execute the sql statement (and ignore all other parameters
>            except entityName and rawRows --both optional in that case)

Ok for rawRows on fetch().

However, come to think of it, sql should be part of a different function,
with a different interface, and one that always returns raw rows. Something
like fetchSql(). As it is very nice to have a full query interface that is
completely db-independent, it is also convenient to be able to access
specific db query api features. This could also supersede fetchCount() and
other such possibilities. In this way, fetch() can be guaranteed to be
db-independent, while fetchSql() may not be.

>   BTW I also suggest that the unsupported features in the fetch API are
>   removed until they are implemented (such as limit/page/offset/etc).

OK.

> > I feel that some things (even if no one has requested them yet ;) are
> > missing... for example, should one ever need to take advantage of sql
> > "group by" and "having". Access to these sql clauses may be added later,
> > without breaking this API, which is OK. Also, such manips may be done in
> > the middle code, but that would be very inefficient.
> [...]
> >
> > The real problem with this is that to request the result of an sql
> > function, of which count() is an example, additional api functionality
> > is needed. But what happens if I want other functions, such as SUM or
> > MAX or AVERAGE over a few columns? Each of these functions may take
> > complex parameters. Again, the functionality may be replicated in the
> > middle code, but this would not only be a waste of development time, but
> > also be very inefficient.
> >
> > I propose either a generalization of "select" (which may be too
> > complicated in its implications), or an addition of a func keyword
> > option, e.g.
> >
> >   func='count'
> >   func=('funcName', param1, param2, ...)
> >   func=( ('funcName', param1, param2, ...), ('funcname2') )
> >
> >   func=('sum','age')
> >
> > The question is then how is this information returned?
> > I suggest as a tuple...
> [snipped]
>
>   That's a very interesting idea, but this will take too much effort for
>   it to be developed shortly. Let me explain: if we do this just as it
>   sounds, then we will have *pieces* of raw sql in the middle of generic
>   command patterns --I don't like that. Do not misunderstand me: it should
>   be possible to abstract this in a certain way, but I really wonder if
>   it's worth the effort.

This is certainly not high priority. It is a good point to consider now, to
know that you will not be stuck with a limited API.

>   And there are more problems, consider this:
>
>   - either you simply want to sum()/avg()/max()/etc. on a table and its
>     attributes, and then I guess it's probably sufficient to offer the
>     possibility to fetch(sql='select max(age) from author'), as stated
>     above;
>
>   - or you want to use this along with the automatic generation of complex
>     queries (e.g. with multiple joins): okay, but then you must be able to
>     say which attributes you want, and it's definitely not sufficient to
>     tell which table it belongs to: relations can be reflexive (such as
>     the 'pygmalion' relationship in the author/book model), and in such
>     cases you have two possibilities for Author.age: table alias t1 or t2
>     (referring to the sql statements above). This also means that this
>     expression needs to be bound to the automatic generation of sql
>     statements.

Yes, this second one is certainly difficult. For now, separating out the
possibility to make raw sql fetches should be enough.

>   I can't think of a straightforward way to do this by now. Again, I'm not
>   saying this is impossible: I'm just playing with the interesting idea
>   and explaining the difficulties I can foresee, wondering whether such an
>   advanced functionality would be worth the effort. That's an open
>   question, and for the moment being I suggest we do not take this into
>   account _as far as the API change proposal is concerned_. This could be
>   discussed in a separate thread, and it would help a lot if we had some
>   real-life examples showing where this could be very handy.
>   --> Same for 'group by' and 'having' statements by the way, since I'm
>   not really familiar with them either.
>
>   Mario> I would also add "by indicating clearly the small subset of
>   Mario> methods intended for use by client code, and the stability level".
>
>   Right. I'm still looking for a way to include this in the docstrings so
>   that the generation of the API can eat it (FYI we use epydoc).

OK, great.

mario

> -- Sébastien.
From: Mario R. <ma...@ru...> - 2003-06-17 18:05:30
Sebastien Bigaret <sbi...@us...> wrote:
> > Mario> Sébastien, do you think you will integrate the SQLite adapter
> > Mario> you had announced some time ago, for 0.9?
> >
> > Yes, it will be integrated in 0.9 (and possibly before).
>
> The new adaptor layer for SQLite has been added to cvs today, and will
> participate in the next release.
>
> -- Sébastien.

Great, thanks. Sorry been so virtual lately... Hope to try this one out soon.

mario