modeling-users Mailing List for Object-Relational Bridge for python (Page 42)
Status: Abandoned
Brought to you by: sbigaret
From: Mario R. <ma...@ru...> - 2003-02-01 18:54:39
Thanks for the changes... But, build (and install) fails:

% python setup.py build
...
copying Modeling/DatabaseAdaptors/MySQLAdaptorLayer/MySQLSQLExpression.py -> build/lib/Modeling/DatabaseAdaptors/MySQLAdaptorLayer
running build_scripts
creating build/scripts-2.2
error: file 'Modeling/scripts/generate_DB_schema.py' does not exist

Cheers, mario

On samedi, fév 1, 2003, at 20:01 Europe/Amsterdam, Sebastien Bigaret wrote:

> Hi all,
>
> The core v0.8.2 has just been released today.
>
> Main changes are:
>
> - full support for postgresql 7.3
>
> - addition of the script 'generate_DB_schema.py' to (re)initialize a
>   database from an xml-model; this functionality was previously only
>   available within the ZModeler, now this can be done on the
>   command-line.
>
> Other changes are bug-fixes as described in the changelog, included at
> the end of the message. Some of these address bugs recently reported
> here, such as test_EditingContext_Global failing (w/ postgresql 7.3)
> when triggered more than once with the '-r' option.
>
> Last, two environment variables are added:
>
> - POSTGRESQL_SERVER_VERSION (possible values: '7.2' or '7.3') --see
>   below for details
>
> - ENABLE_DATABASE_LOGGING: set to a non-empty string, enables logging
>   at the DatabaseAdaptors level.
>
> Best regards,
>
> -- Sébastien.
>
> ---------------
> 0.8.2 CHANGELOG
> ---------------
>
> * Added scripts/generate_DB_schema.py (-h for help)
>
> * Generation of SQL statements:
>
>   (Modeling, Modeling.interfaces, PostgresqlAdaptorLayer)
>   SchemaGeneration: the three following methods were added to the API:
>
>   - dropForeignKeyConstraintStatementsForRelationship
>   - dropPrimaryKeyConstraintStatementsForEntityGroup(s)
>
>   along with their associated constants DropForeignKeyConstraintsKey
>   and DropPrimaryKeyConstraintsKey.
>
>   SchemaGeneration.defaultOrderingsForSchemaCreation() has also been
>   corrected (it was wrong: it alternated drop and create statements
>   instead of having the drop statements come first, then the create
>   statements in reverse order).
>
>   + PostgresqlAdaptorLayer: now correctly generates create/drop SQL
>     statements for postgresql versions 7.2 and 7.3. The environment
>     variable 'POSTGRESQL_SERVER_VERSION' allows you to specify which
>     version of the pg-server you are working with (default: 7.2)
>     [Added for that purpose: PostgresqlAdaptorLayer.pg_utils]
>
>     Note: this corrects a bug in test_EditingContext_Global.py which,
>     when triggered with the '-r' option (recreate database), was not
>     able to actually recreate the database for pg-server v7.3.
>     Thank you Mario <ma...@ru...> for the bug report!
>
> * Postgresql & psycopg:
>   PostgresqlAdaptorContext/adaptorChannelDidClose() now rolls back the
>   underlying connection object when using psycopg --if not done, this
>   leads to strange behaviour because changes committed by others are
>   never seen.
>
> * SQLExpression: changes in the set of supported SQL datatypes
>
>   - SQLExpression.valueTypeForExternalTypeMapping(): removed datetime
>     & timestampz from the list, added date & time
>
>   - Added MySQL/PostgresqlSQLExpression to the corresponding Database
>     Adaptors Layer:
>
>     Postgresql: additional supported datatypes are: 'datetime'
>     (warning: not supported anymore for postgresql version >= 7.3),
>     'timestamp without time zone' and 'timestamp with time zone',
>
>     MySQL: additional supported datatype: 'datetime', removed
>     datatype: 'timestamp' (see
>     DatabaseAdaptors.MySQLAdaptorLayer.MySQLSQLExpression for a
>     complete explanation)
>
>   NB: ModelValidation has been updated to report the problem with
>   postgresql and 'datetime'
>
> * Database Adaptors logging:
>
>   - they no longer log the password used when connecting to a database
>     (replaced by 'xxxx' so that a bug report containing log msgs or
>     tracebacks will not disclose the password by mistake)
>
>   - it is not activated by default, except for error and fatal msgs.
>     To activate them, set the environment variable
>     ENABLE_DATABASE_LOGGING to any non-empty string.
>
> * Fixed Qualifier.QualifierOperatorLike and
>   QualifierOperatorCaseInsensitiveLike: they were failing when
>   comparing the pattern to a value which is not a string, such as an
>   integer or an mxDateTime.
>
> * ObjectStore: added ownsObject(), and made handlesObject() an alias
>   for ownsObject().
>
> * tests.test_EC_Global.test_999_customSQLQuery() fixed: it failed w/
>   the pypgsql adaptor --see comments in code for details.
>
> * PostgresqlAdaptorLayer & MySQLAdaptorLayer: fixed: useless import
>   statements in __init__ were shadowing the modules PostgresqlAdaptor
>   and MySQLAdaptor
From: Sebastien B. <sbi...@us...> - 2003-02-01 18:01:19
Hi all,

The core v0.8.2 has just been released today.

Main changes are:

- full support for postgresql 7.3

- addition of the script 'generate_DB_schema.py' to (re)initialize a
  database from an xml-model; this functionality was previously only
  available within the ZModeler, now this can be done on the
  command-line.

Other changes are bug-fixes as described in the changelog, included at
the end of the message. Some of these address bugs recently reported
here, such as test_EditingContext_Global failing (w/ postgresql 7.3)
when triggered more than once with the '-r' option.

Last, two environment variables are added:

- POSTGRESQL_SERVER_VERSION (possible values: '7.2' or '7.3') --see
  below for details

- ENABLE_DATABASE_LOGGING: set to a non-empty string, enables logging
  at the DatabaseAdaptors level.

Best regards,

-- Sébastien.

---------------
0.8.2 CHANGELOG
---------------

* Added scripts/generate_DB_schema.py (-h for help)

* Generation of SQL statements:

  (Modeling, Modeling.interfaces, PostgresqlAdaptorLayer)
  SchemaGeneration: the three following methods were added to the API:

  - dropForeignKeyConstraintStatementsForRelationship
  - dropPrimaryKeyConstraintStatementsForEntityGroup(s)

  along with their associated constants DropForeignKeyConstraintsKey
  and DropPrimaryKeyConstraintsKey.

  SchemaGeneration.defaultOrderingsForSchemaCreation() has also been
  corrected (it was wrong: it alternated drop and create statements
  instead of having the drop statements come first, then the create
  statements in reverse order).

  + PostgresqlAdaptorLayer: now correctly generates create/drop SQL
    statements for postgresql versions 7.2 and 7.3. The environment
    variable 'POSTGRESQL_SERVER_VERSION' allows you to specify which
    version of the pg-server you are working with (default: 7.2)
    [Added for that purpose: PostgresqlAdaptorLayer.pg_utils]

    Note: this corrects a bug in test_EditingContext_Global.py which,
    when triggered with the '-r' option (recreate database), was not
    able to actually recreate the database for pg-server v7.3.
    Thank you Mario <ma...@ru...> for the bug report!

* Postgresql & psycopg:
  PostgresqlAdaptorContext/adaptorChannelDidClose() now rolls back the
  underlying connection object when using psycopg --if not done, this
  leads to strange behaviour because changes committed by others are
  never seen.

* SQLExpression: changes in the set of supported SQL datatypes

  - SQLExpression.valueTypeForExternalTypeMapping(): removed datetime
    & timestampz from the list, added date & time

  - Added MySQL/PostgresqlSQLExpression to the corresponding Database
    Adaptors Layer:

    Postgresql: additional supported datatypes are: 'datetime'
    (warning: not supported anymore for postgresql version >= 7.3),
    'timestamp without time zone' and 'timestamp with time zone',

    MySQL: additional supported datatype: 'datetime', removed
    datatype: 'timestamp' (see
    DatabaseAdaptors.MySQLAdaptorLayer.MySQLSQLExpression for a
    complete explanation)

  NB: ModelValidation has been updated to report the problem with
  postgresql and 'datetime'

* Database Adaptors logging:

  - they no longer log the password used when connecting to a database
    (replaced by 'xxxx' so that a bug report containing log msgs or
    tracebacks will not disclose the password by mistake)

  - it is not activated by default, except for error and fatal msgs.
    To activate them, set the environment variable
    ENABLE_DATABASE_LOGGING to any non-empty string.

* Fixed Qualifier.QualifierOperatorLike and
  QualifierOperatorCaseInsensitiveLike: they were failing when
  comparing the pattern to a value which is not a string, such as an
  integer or an mxDateTime.

* ObjectStore: added ownsObject(), and made handlesObject() an alias
  for ownsObject().

* tests.test_EC_Global.test_999_customSQLQuery() fixed: it failed w/
  the pypgsql adaptor --see comments in code for details.

* PostgresqlAdaptorLayer & MySQLAdaptorLayer: fixed: useless import
  statements in __init__ were shadowing the modules PostgresqlAdaptor
  and MySQLAdaptor
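[Editor's note: a minimal sketch of how the two environment variables announced above might be set from Python. Only the variable names and accepted values come from the announcement; everything else, including the assumption that they are read when the framework is imported, is illustrative.]

    import os

    # Tell the Postgresql adaptor layer which server version it talks to;
    # per the changelog the accepted values are '7.2' and '7.3' (default: '7.2').
    os.environ['POSTGRESQL_SERVER_VERSION'] = '7.3'

    # Any non-empty string enables logging at the DatabaseAdaptors level;
    # leave it unset to keep only error and fatal messages.
    os.environ['ENABLE_DATABASE_LOGGING'] = '1'

    import Modeling  # import the framework only after the environment is set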
From: Mario R. <ma...@ru...> - 2003-01-31 00:55:21
Hello,

...

> - model design, model validation, generation of databases and schemas,
>   generation of python classes are available in the ZModeler.

Yes, and they are all very nice things to have...

> - Updating a database schema after changes made on a model is not
>   available --that feature would be in the tool, not in the core.
>   Reverse-engineering an existing database is not available, it's
>   been on the todo list for a certain time.

Absolutely.

> In a previous mail you were speaking of the prominence of the zmodeler
> and suggesting a different organization of the documentation. This
> should be done, for sure. I've begun writing a tutorial, hence the pb of
> making the code/sql generation available apart from zope was identified
> and I should be able to release the scripts soon.

Ah, very good to have the script. Also the tutorial. If you like I can
review the tutorial, as well as the other docs...

> - Documentation for the xml format: in a way, there is some, even if it's
>   not that obvious... In fact, section 2.2 in the User's Guide describes
>   each element that can be found in an xmlmodel (sample xml models are in
>   testPackages.AuthorBooks and StoreEmployees).

Yes, it is there, but it reads more like a brief overview of the main
elements and attributes, leaving one wondering what the complete picture
is. Having the XML spec available will take the pressure off the general
explanation, allowing it to highlight only what is typically most
pertinent and not get lost in detail. The XML spec could be its XSD
(although I'd prefer another, more expressive (human) definition
syntax)... in any case, it should only be up to half a page long. Also,
I do not think that validation of the XML (per se) would be particularly
useful, as it would not imply much about how valid the represented model
is anyway.

...

> I can't see precisely how to design a csv that maps nicely to the xml,
> but I'm open to suggestions! Same, anyone feeling like coding some tools
> will be welcome ;) and in that case, I'll start a dev-branch and share
> the unreleased-coz'-unfinished code I already have.

CSV may or may not be appropriate, given that you do have some nesting
in your format. However, after I familiarize myself better with the
schema, I could offer something more concrete. On this issue, another
possibly interesting way to handle this "description language" is using
a construct or mini-language in python itself...

...

>> This could be an optimization later -- instead of sorting on the store
>> it might be faster to do a db query to get only the sort info (get
>> only the id?) and then iterate on that sort order using the data in
>> the store (or getting it when missing).
>
> That's the idea. The only thing that needs to be done is to generate SQL
> code from a SortOrdering object.

OK.

>> ... all requests for view-only data are serviced by a common read-only
>> object store, while all modify-data requests are serviced by a
>> (short-lived) user store that could be independent from the read-only
>> store, or could be a nested store to it.
>
> A nested store to a read-only object store? That's a very, very good
> idea, I didn't think about that --must be still too impregnated with the
> way the EOF handles that. This would make it easy to update
> mostly-read objects while keeping the memory footprint low. I just
> dropped a note in the TODO list about that, thanks!

Ah, yes. Your TODO list is very comprehensive -- you will be busy with
this for the next few years \-? Hmmn, I'd say focus on the docs and web
site, and announcements, to attract a few more users -- who will
certainly help with the TODO list (hopefully not only making it longer :-)

>> Again, cost of fetching is (much) less important here. In addition,
>> normally data is modified in small units, typically 1 object at a
>> time. But when browsing, it is common to request fetches of 1000s of
>> objects...
>
> That's exactly why there should be the possibility to fetch and get raw
> rows, not objects.

There isn't? But one can always do a raw select...

>>> However, even in the still-uncommitted changes, the framework
>>> lacks an important feature which is likely to be the next thing
>>> I'll work on: the ability to broadcast the changes committed on
>>> one object store to the others (in fact, the notifications are
>>> already made, but they are not properly handled yet).
>>
>> Yes, this would be necessary. Not sure how the broadcast model will
>> work (may not always know who the end clients are -- data on a client
>> web page will never know about these changes even if you broadcast).
>> But, possibly a feasible simple strategy could be that each store will
>> keep a copy of the original object as it received it.
>
> There is a central and shared object which stores the snapshots when
> rows are fetched; you'll find that in class Database. FYI the
> responsibility of broadcasting messages is assigned to the
> NotificationCenter distributed in the NotificationFramework apart from
> the Modeling.

OK. Have not yet dived into the code, but will do.

>> When committing a modify it compares the original object(s) being
>> modified to those in the db, if still identical, locks the objects and
>> commits the changes, otherwise returns an error "object has changed
>> since". This behaviour could be application specific, i.e. in some
>> cases you want to go ahead and modify anyway, and in others you want
>> to do something else. Thus, this could be a store option, with the
>> default being the error.
>
> What you describe here seems to be optimistic locking: when updated, the
> corresponding database row is compared against the snapshot we got at
> the time the object was fetched, and if they do not match, this is an
> error (recoverable, in case you want to update anyway, take whatever
> action depending on the differences and the object and object's type,
> etc.).
>
> What I was talking about is something different. Suppose S1 and S2 are
> two different object stores, resp. holding objects O1 and o1 that
> correspond to the very same row in the database (in other words, they
> are clones living each in its own world). Now suppose that o1 is
> updated and saved back to the database: you may want to make sure that
> O1 gets the changes as well (of course, you may also request that
> changes that would have been made on it should be re-applied after
> they are broadcast).
>
>> In combination with this, other stores may still be notified of
>> changes, and it is up to them (or their configuration) whether to
>> reload the modified objects, or to ignore the broadcasts.
>
> (ok, i should read more carefully what follows before answering ;)

OK, so we are meaning the same thing here.

>> Otherwise, on each fetch the framework could check if any changes have
>> taken place (may be expensive) for the concerned objects, and reload
>> them automatically. This has the advantage of covering those cases
>> when changes to data in the db are done by some way beyond the
>> framework, and thus the framework can never be sure to know about it.
>
> Yes, this would be expensive, and this is already available to a certain
> extent. You can ``re-fault''/invalidate your objects so that they are
> automatically re-fetched next time they are accessed. I understand this
> is not *exactly* the way you think of it, but that's kind of an
> implementation and optimization detail, except if you expect objects to
> be refreshed while not overriding the changes that were already made
> --this is not possible yet.

This may be enough, then. I do not know if this is already there, but I
guess it would also be useful to be able to simply request to force a
fresh fetch -- if any of the objects are already in the store they are
reloaded.

>>> Last and although this is not directly connected to that topic, I'd
>>> like to make another point more precise: the framework does not
>>> implement yet any locking, either optimistic or pessimistic, on DB's
>>> rows. This means that if other processes modify a row in the DB, an
>>> ObjectStore will not notice the changes and will possibly override
>>> them if the corresponding object has been modified in the meantime.
>>
>> Yes, but locking objects should be very short lived -- only the time
>> to commit any changes, as mentioned above. This also makes it much
>> less important, and only potentially a problem when two processes try
>> to modify the same object at the same time.
>
> Postgresql already locks rows during updates (I can't remember what
> MySQL does) but anyway I do not plan to support this kind of short-lived
> locks --to my point of view they should be managed by the database
> itself.

Agreed.

> What I meant was ``locking policy''
> (http://minnow.cc.gatech.edu/squeak/2634).
> Pessimistic locking can result in long-standing locks, from the moment
> an object is *about* to be updated to the moment it is made persistent
> --this might be an application requirement that an object cannot be
> edited by more than one person at a time.

In general I am wary of this kind of behaviour as (a) it probably
increases the possibility of clashes and "deadlocks", as mentioned in
the linked article, (b) it is heavier on the server and (c) it is more
difficult to program for, and introduces an additional set of possible
problems, e.g. what if the application, after acquiring a lock on some
objects, runs into problems and never gives it up? Bad programming?
Maybe, but the program should not have to worry about this.

Also, on requiring an object to be modified by only one person at a
time -- this can always be handled with the mechanism described above,
namely that before committing a change, the "original" object is
compared to the db object, and if different the error is raised (and
may be ignored). Besides, when would such a case require this? Aside,
even systems like CVS do not restrict a checked-out item to being
modified by only one person.

> Well, I hope the overview of what the framework does /not/ support yet
> does not completely hide what it already does!

Hey, my interest in Modeling is based on what it does, and how it does
it... But, as I mentioned above, it would help if all that it actually
does already were given more prominence...

> Regards,
>
> -- Sébastien.

Regards, mario
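[Editor's note: the "compare a snapshot before committing" policy discussed in the message above can be illustrated with a small Python sketch. None of these names belong to the Modeling API; fetch_current_row and write_row are hypothetical stand-ins for whatever adaptor-level calls would be used.]

    class StaleObjectError(Exception):
        """The row changed in the database after the object was fetched."""

    def commit_with_optimistic_check(obj, snapshot, fetch_current_row, write_row):
        # 'snapshot' is the row as it looked when the object was fetched.
        current = fetch_current_row(obj)
        if current != snapshot:
            # Recoverable: the caller may catch this and decide to overwrite
            # anyway, merge the changes, or report the conflict to the user.
            raise StaleObjectError("object has changed since it was fetched")
        write_row(obj)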
From: Mario R. <ma...@ru...> - 2003-01-30 00:16:18
Hello,

...

> Ok, fine. I've just made some changes on the CVS tree (see CHANGES), but
> there's still some work to be done within the test-suite so that the
> model that uses 'timestamp' switches back to 'datetime' when MySQL is
> used. The reason for this is that the datatype 'timestamp' has an
> automatic-update feature on MySQL that I cannot handle --the framework
> just cannot foresee whether it has been changed or not.

That's quite unfortunate. However, it also makes me ask why the XML
model includes the adaptorName and connectionDictionary attributes?
These are specifying mysql or postgres... so the rest of the model
definition should comply? I know, it should be independent, but then
also both these attributes should not appear here (in any case they
make the model, as is, not portable).

...

>> Evaluating: DROP TABLE WRITER
>> Couldn't evaluate expression DROP TABLE WRITER. Reason:
>> libpq.OperationalError:ERROR: Cannot drop table writer because other
>> objects depend on it
>> Use DROP ... CASCADE to drop the dependent objects too
>
> Although logical, that's amazing, I never get this with postgres 7.2.1.
> I've to check that, and see if I can find a common behaviour. If someone
> has any hints about this, please share!

No ideas, but why is it logical?

...

> Last, can you tell which python version you're currently using?

As of a couple of days ago I am running python 2.2.2, built locally.

> -- Sébastien.

Thanks again for the fixes. Now I need to actually try something more
real with it...

Cheers, mario
From: Sebastien B. <sbi...@us...> - 2003-01-28 23:19:50
Hi,

Back to your comments:

> > Sure, using the ZModeler is much, much more comfortable than
> > editing an xml-file. But I understand installing and launching
> > Zope just for that can be a burden --I'm currently looking for an
> > alternate solution.
>
> Yes, but this should not be considered high priority. I feel it is
> more important to expose (to document) the interaction between the
> different layers (tool - model description (xml) - python & sql).
> Specifically, documentation for the XML format, documentation of the
> constraints on the generated python classes and on the constraints on
> the generated sql for the tables, and the utility or code to generate
> the python classes and the sql to init and/or update the database with
> the corresponding tables. As I understand, generation of python
> classes and sql statements is only available via the Z tool at the
> moment? (Apologies beforehand if this info is in the doc already and I
> have missed it...)

No need to apologize, you're pointing out the right things!

- model design, model validation, generation of databases and schemas,
  generation of python classes are available in the ZModeler.

- Updating a database schema after changes made on a model is not
  available --that feature would be in the tool, not in the core.
  Reverse-engineering an existing database is not available, it's been
  on the todo list for a certain time.

  In a previous mail you were speaking of the prominence of the zmodeler
  and suggesting a different organization of the documentation. This
  should be done, for sure. I've begun writing a tutorial, hence the pb
  of making the code/sql generation available apart from zope was
  identified and I should be able to release the scripts soon.

- Documentation for the xml format: in a way, there is some, even if
  it's not that obvious... In fact, section 2.2 in the User's Guide
  describes each element that can be found in an xmlmodel (sample xml
  models are in testPackages.AuthorBooks and StoreEmployees).

> Given this, it is quite likely that others may come up with such
> tools... For example it could be quite simple to, borrowing the CSV
> idea, generate your XML from a spreadsheet... opening up the
> possibility to use any old spreadsheet program as a basic model
> editor. Or, if the XML will not map nicely to CSV, some other cooked
> editor for this XML doctype.

I can't see precisely how to design a csv that maps nicely to the xml,
but I'm open to suggestions! Same, anyone feeling like coding some
tools will be welcome ;) and in that case, I'll start a dev-branch and
share the unreleased-coz'-unfinished code I already have.

[...]

> > 2. Modeling.SortOrdering is specifically designed for sorting
> >    purposes, to be used along with Qualifiers in
> >    FetchSpecifications, but for the moment it can only be used for
> >    in-memory sorts.
>
> This could be an optimization later -- instead of sorting on the store
> it might be faster to do a db query to get only the sort info (get
> only the id?) and then iterate on that sort order using the data in
> the store (or getting it when missing).

That's the idea. The only thing that needs to be done is to generate
SQL code from a SortOrdering object.

> > [about] having a shared ObjectStore where mostly-read-only objects
> > live and are shared by all other object stores. This is called
> > a Shared Editing Context in the EOF.
>
> Could be powerful in combination with the previous feature -- all
> requests for view-only data are serviced by a common read-only object
> store, while all modify-data requests are serviced by a (short-lived)
> user store that could be independent from the read-only store, or
> could be a nested store to it.

A nested store to a read-only object store? That's a very, very good
idea, I didn't think about that --must be still too impregnated with
the way the EOF handles that. This would make it easy to update
mostly-read objects while keeping the memory footprint low. I just
dropped a note in the TODO list about that, thanks!

> Again, cost of fetching is (much) less important here. In addition,
> normally data is modified in small units, typically 1 object at a
> time. But when browsing, it is common to request fetches of 1000s of
> objects...

That's exactly why there should be the possibility to fetch and get raw
rows, not objects.

> > However, even in the still-uncommitted changes, the framework
> > lacks an important feature which is likely to be the next thing
> > I'll work on: the ability to broadcast the changes committed on
> > one object store to the others (in fact, the notifications are
> > already made, but they are not properly handled yet).
>
> Yes, this would be necessary. Not sure how the broadcast model will
> work (may not always know who the end clients are -- data on a client
> web page will never know about these changes even if you broadcast).
> But, possibly a feasible simple strategy could be that each store will
> keep a copy of the original object as it received it.

There is a central and shared object which stores the snapshots when
rows are fetched; you'll find that in class Database. FYI the
responsibility of broadcasting messages is assigned to the
NotificationCenter distributed in the NotificationFramework apart from
the Modeling.

> When committing a modify it compares the original object(s) being
> modified to those in the db, if still identical, locks the objects and
> commits the changes, otherwise returns an error "object has changed
> since". This behaviour could be application specific, i.e. in some
> cases you want to go ahead and modify anyway, and in others you want
> to do something else. Thus, this could be a store option, with the
> default being the error.

What you describe here seems to be optimistic locking: when updated,
the corresponding database row is compared against the snapshot we got
at the time the object was fetched, and if they do not match, this is
an error (recoverable, in case you want to update anyway, take whatever
action depending on the differences and the object and object's type,
etc.).

What I was talking about is something different. Suppose S1 and S2 are
two different object stores, resp. holding objects O1 and o1 that
correspond to the very same row in the database (in other words, they
are clones living each in its own world). Now suppose that o1 is
updated and saved back to the database: you may want to make sure that
O1 gets the changes as well (of course, you may also request that
changes that would have been made on it should be re-applied after they
are broadcast).

> In combination with this, other stores may still be notified of
> changes, and it is up to them (or their configuration) whether to
> reload the modified objects, or to ignore the broadcasts.

(ok, i should read more carefully what follows before answering ;)

> Otherwise, on each fetch the framework could check if any changes have
> taken place (may be expensive) for the concerned objects, and reload
> them automatically. This has the advantage of covering those cases
> when changes to data in the db are done by some way beyond the
> framework, and thus the framework can never be sure to know about it.

Yes, this would be expensive, and this is already available to a
certain extent. You can ``re-fault''/invalidate your objects so that
they are automatically re-fetched next time they are accessed. I
understand this is not *exactly* the way you think of it, but that's
kind of an implementation and optimization detail, except if you expect
objects to be refreshed while not overriding the changes that were
already made --this is not possible yet.

> > Last and although this is not directly connected to that topic, I'd
> > like to make another point more precise: the framework does not
> > implement yet any locking, either optimistic or pessimistic, on DB's
> > rows. This means that if other processes modify a row in the DB, an
> > ObjectStore will not notice the changes and will possibly override
> > them if the corresponding object has been modified in the meantime.
>
> Yes, but locking objects should be very short lived -- only the time
> to commit any changes, as mentioned above. This also makes it much
> less important, and only potentially a problem when two processes try
> to modify the same object at the same time.

Postgresql already locks rows during updates (I can't remember what
MySQL does) but anyway I do not plan to support this kind of
short-lived locks --to my point of view they should be managed by the
database itself.

What I meant was ``locking policy''
(http://minnow.cc.gatech.edu/squeak/2634).
Pessimistic locking can result in long-standing locks, from the moment
an object is *about* to be updated to the moment it is made persistent
--this might be an application requirement that an object cannot be
edited by more than one person at a time.

Well, I hope the overview of what the framework does /not/ support yet
does not completely hide what it already does!

Regards,

-- Sébastien.
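[Editor's note: a rough, purely illustrative Python sketch of the parent/child ("nested") object-store arrangement discussed in this exchange -- a shared, read-mostly parent serving fetches, and a short-lived child holding one user's uncommitted modifications until they are saved back. The class and method names are invented for the example and are not the framework's API.]

    class SharedStore:
        """Read-mostly store shared by all sessions."""
        def __init__(self):
            self.objects = {}                 # global_id -> object

        def object_for_id(self, global_id):
            return self.objects.get(global_id)

    class NestedStore:
        """Per-user child store; local changes stay invisible to others until saved."""
        def __init__(self, parent):
            self.parent = parent
            self.changes = {}                 # global_id -> locally modified copy

        def object_for_id(self, global_id):
            # Uncommitted local changes win; otherwise fall back to the parent,
            # which keeps the memory footprint of the child small.
            if global_id in self.changes:
                return self.changes[global_id]
            return self.parent.object_for_id(global_id)

        def save_changes(self):
            # Committing pushes the local modifications back to the parent,
            # at which point other users can see them.
            self.parent.objects.update(self.changes)
            self.changes = {}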
From: Sebastien B. <sbi...@us...> - 2003-01-28 19:17:58
Hi,

> > I suggest that you edit
> > tests/testPackages/AuthorBooks/model_AuthorBooks.xml and
> ...
> > so that it says (change externalType DATETIME to TIMESTAMP):
>
> Changing datetime to timestamp works perfectly. From a clean database,
> the following sequence of tests runs OK (except for dropping of
> non-existent tables):

Ok, fine. I've just made some changes on the CVS tree (see CHANGES), but
there's still some work to be done within the test-suite so that the
model that uses 'timestamp' switches back to 'datetime' when MySQL is
used. The reason for this is that the datatype 'timestamp' has an
automatic-update feature on MySQL that I cannot handle --the framework
just cannot foresee whether it has been changed or not. Details are in
MySQLAdaptorLayer.MySQLSQLExpression and at:
http://www.mysql.com/documentation/mysql/bychapter/manual_Reference.html#DATETIME

> However, running the tests a 2nd time gives errors, namely because
> the now existent tables cannot be dropped. The first error occurs in
> the 2nd script step, for which I am including the output below. This
> should probably be corrected.
> [...]
> # output log:
> Evaluating: DROP TABLE WRITER
> Couldn't evaluate expression DROP TABLE WRITER. Reason:
> libpq.OperationalError:ERROR: Cannot drop table writer because other
> objects depend on it
> Use DROP ... CASCADE to drop the dependent objects too

Although logical, that's amazing, I never get this with postgres 7.2.1.
I've to check that, and see if I can find a common behaviour. If
someone has any hints about this, please share!

> I am also suggesting a slightly modified tests/README...

Well, it seems a lot clearer than what I wrote, and it now replaces the
former one. Thanks!

Last, can you tell which python version you're currently using?

-- Sébastien.
From: Mario R. <ma...@ru...> - 2003-01-28 17:22:15
For info... I am forwarding this announcement on the webware list, about
yet another OO to RDBMS bridge... And, while we are at it, there is also
this module as part of the twisted framework:

Twisted Enterprise Row Objects: http://twistedmatrix.com/documents/howto/row

m.

Begin forwarded message:

> From: Ian Bicking <ia...@co...>
> Date: Mar jan 28, 2003 10:06:27 Europe/Amsterdam
> To: Webware discuss <web...@li...>
> Subject: [Webware-discuss] ANN: SQLObject 0.2
>
> SQLObject is an object-relational mapper (i.e., a database/SQL wrapper)
> that supports MySQL and PostgreSQL. Version 0.2 is the first public
> release.
>
> The software can be found at:
> http://colorstudy.com/software/SQLObject
>
> Features:
> - Takes advantage of new-style classes (both properties and
>   metaclasses). Hi-Fi Python! (Python 2.2 required)
> - No external formats or configuration (e.g. XML); your tables are
>   described with the class definition.
> - Accessing a column appears the same as accessing any other attribute.
> - Accompanying SQLBuilder allows WHERE clauses to be coded (somewhat)
>   naturally in Python without SQL.
> - Caching (or no caching, no pressure).
> - Fully documented (interface documentation plus source comments).
> - Simple (naive?) transaction support for PostgreSQL.
> - Conforms to traditional schema; should be adaptable to legacy schemas.
> - Understands basic relations.
> - The string "get" does not appear in any method.
> - 100% larger version number than previous version.
>
> License:
> LGPL
>
> Author:
> Me
>
> --
> Ian Bicking   Colorstudy Web Development
> ia...@co...   http://www.colorstudy.com
> PGP: gpg --keyserver pgp.mit.edu --recv-keys 0x9B9E28B7
> 4869 N Talman Ave, Chicago, IL 60625 / (773) 275-7241
From: Mario R. <ma...@ru...> - 2003-01-28 17:22:12
Hi,

OK, thanks for the postgres debugging (sorry I had not looked there
myself). I am running postgres 7.3.1 (btw, postgres does not come by
default on osx but has to be installed...).

> I suggest that you edit
> tests/testPackages/AuthorBooks/model_AuthorBooks.xml and
...
> so that it says (change externalType DATETIME to TIMESTAMP):

Changing datetime to timestamp works perfectly. From a clean database,
the following sequence of tests runs OK (except for dropping of
non-existent tables):

% python run.py
% python test_EditingContext_Global.py -r
% python test_EditingContext_Global.py
% python test_EditingContext.py
% python test_EditingContext_Global_Inheritance.py -r
% python test_EditingContext_Global_Inheritance.py

However, running the tests a 2nd time gives errors, namely because the
now existent tables cannot be dropped. The first error occurs in the
2nd script step, for which I am including the output below. This should
probably be corrected.

I am also suggesting a slightly modified tests/README...

mario

=================================
# tests/README

Test the installation of the Modeling framework
-----------------------------------------------

To test the installation, perform the following sequence of commands or
manual operations:

% cd <unpacked Modeling package>/Modeling/tests
% python ./run.py    # running with -h gives usage

# Remaining tests require:
#  - that a PostgreSQL and/or MySQL database server is running somewhere
#  - that you modify the file ./test.cfg to reflect your own
#    PostgreSQL/MySQL installation
#  - two different databases (may be empty) with names: "AUTHOR_BOOKS"
#    & "STORE_EMPLOYEES"

# Initialise(*)(**) EditingContext test db
# (uses model in ./testPackages/AuthorBooks/model_AuthorBooks.xml):
% python test_EditingContext_Global.py -r   # initialise db
% python test_EditingContext_Global.py      # run tests
% python test_EditingContext.py

#- Initialise(*) Inheritance test db (uses model in ./testPackages/StoreEmployees)
% python test_EditingContext_Global_Inheritance.py -r   # initialise db
% python test_EditingContext_Global_Inheritance.py      # run test

--
(*) Note that, the first time this script is run on an empty database,
it is OK for the DROP statements to fail (but only the DROP
statements), with error messages similar to:

  Evaluating: DROP SEQUENCE PK_SEQ_WRITER
  Couldn't evaluate expression DROP SEQUENCE PK_SEQ_WRITER. Reason:
  libpq.OperationalError:ERROR: sequence "pk_seq_writer" does not exist

(**) For PostgreSQL versions > 7.0, models that use the deprecated
types datetime and timespan will give an error -- the respectively
equivalent types timestamp and interval should be used instead.

=================================
# output log

% python test_EditingContext_Global.py -r
PostgresqlAdaptor: using pypgsql
Creating PostgresqlAdaptorChannel
<PostgresqlAdaptorLayer.PostgresqlAdaptorChannel.PostgresqlAdaptorChannel instance at 0xac64f0>
Transaction: BEGIN
Opening connection to the DB with conn.Dict: {'host': 'localhost', 'password': 'XXX', 'user': 'postgres', 'database': 'AUTHOR_BOOKS'}
Opening channel <PostgresqlAdaptorLayer.PostgresqlAdaptorChannel.PostgresqlAdaptorChannel instance at 0xac6310> (0xac6310) Called
Evaluating: DROP TABLE WRITER
Couldn't evaluate expression DROP TABLE WRITER. Reason:
libpq.OperationalError:ERROR: Cannot drop table writer because other objects depend on it
Use DROP ... CASCADE to drop the dependent objects too
Transaction: ROLLBACK
error: Couldn't evaluate expression DROP TABLE WRITER.
Reason: libpq.OperationalError:ERROR: Cannot drop table writer because other objects depend on it Use DROP ... CASCADE to drop the dependent objects too Transaction: BEGIN Evaluating: DROP TABLE BOOK Transaction: COMMIT Transaction: BEGIN Evaluating: CREATE TABLE WRITER ( FIRST_NAME VARCHAR(40) , LAST_NAME VARCHAR(30) NOT NULL, FK_WRITER_ID INTEGER , AGE INTEGER , BIRTHDAY TIMESTAMP , ID INTEGER NOT NULL) Couldn't evaluate expression CREATE TABLE WRITER ( FIRST_NAME VARCHAR(40) , LAST_NAME VARCHAR(30) NOT NULL, FK_WRITER_ID INTEGER , AGE INTEGER , BIRTHDAY TIMESTAMP , ID INTEGER NOT NULL). Reason: libpq.OperationalError:ERROR: Relation 'writer' already exists Transaction: ROLLBACK error: Couldn't evaluate expression CREATE TABLE WRITER ( FIRST_NAME VARCHAR(40) , LAST_NAME VARCHAR(30) NOT NULL, FK_WRITER_ID INTEGER , AGE INTEGER , BIRTHDAY TIMESTAMP , ID INTEGER NOT NULL). Reason: libpq.OperationalError:ERROR: Relation 'writer' already exists Transaction: BEGIN Evaluating: CREATE TABLE BOOK ( PRICE NUMERIC(10,2) , title VARCHAR(40) NOT NULL, id INT NOT NULL, FK_WRITER_ID INTEGER ) Transaction: COMMIT Transaction: BEGIN Evaluating: ALTER TABLE WRITER ADD PRIMARY KEY (ID) Couldn't evaluate expression ALTER TABLE WRITER ADD PRIMARY KEY (ID). Reason: libpq.OperationalError:ERROR: ALTER TABLE / PRIMARY KEY multiple primary keys for table 'writer' are not allowed Transaction: ROLLBACK error: Couldn't evaluate expression ALTER TABLE WRITER ADD PRIMARY KEY (ID). Reason: libpq.OperationalError:ERROR: ALTER TABLE / PRIMARY KEY multiple primary keys for table 'writer' are not allowed Transaction: BEGIN Evaluating: ALTER TABLE BOOK ADD PRIMARY KEY (id) Transaction: COMMIT Transaction: BEGIN Evaluating: ALTER TABLE WRITER ADD CONSTRAINT pygmalion FOREIGN KEY (FK_WRITER_ID) REFERENCES WRITER(ID) INITIALLY DEFERRED Couldn't evaluate expression ALTER TABLE WRITER ADD CONSTRAINT pygmalion FOREIGN KEY (FK_WRITER_ID) REFERENCES WRITER(ID) INITIALLY DEFERRED. Reason: libpq.OperationalError:ERROR: constraint "pygmalion" already exists for relation "writer" Transaction: ROLLBACK error: Couldn't evaluate expression ALTER TABLE WRITER ADD CONSTRAINT pygmalion FOREIGN KEY (FK_WRITER_ID) REFERENCES WRITER(ID) INITIALLY DEFERRED. Reason: libpq.OperationalError:ERROR: constraint "pygmalion" already exists for relation "writer" Transaction: BEGIN Evaluating: ALTER TABLE BOOK ADD CONSTRAINT author FOREIGN KEY (FK_WRITER_ID) REFERENCES WRITER(ID) INITIALLY DEFERRED Transaction: COMMIT Transaction: BEGIN Evaluating: DROP SEQUENCE PK_SEQ_WRITER Transaction: COMMIT Transaction: BEGIN Evaluating: DROP SEQUENCE PK_SEQ_BOOK Transaction: COMMIT Transaction: BEGIN Evaluating: CREATE SEQUENCE PK_SEQ_WRITER START 1 Transaction: COMMIT Transaction: BEGIN Evaluating: CREATE SEQUENCE PK_SEQ_BOOK START 1 Transaction: COMMIT Transaction: BEGIN Evaluating: select nextval('PK_SEQ_WRITER') Evaluating: select nextval('PK_SEQ_WRITER') Evaluating: select nextval('PK_SEQ_WRITER') Transaction: COMMIT Transaction: BEGIN Evaluating: insert into WRITER (id, age, last_name,first_name,birthday,fk_writer_id) values (1,24,'Cleese','John','1939-10-27 08:31:15',null) Couldn't evaluate expression insert into WRITER (id, age, last_name,first_name,birthday,fk_writer_id) values (1,24,'Cleese','John','1939-10-27 08:31:15',null). Reason: libpq.OperationalError:ERROR: Cannot insert a duplicate key into unique index writer_pkey Transaction: ROLLBACK Traceback (most recent call last): File "test_EditingContext_Global.py", line 943, in ? 
errs = main(sys.argv) File "test_EditingContext_Global.py", line 927, in main if reinitDB_flag: reinitDB(); return File "test_EditingContext_Global.py", line 846, in reinitDB channel.evaluateExpression(sqlExpr) File "/Users/frdricva/Desktop/ModelingCore-0.8.1/Modeling/DatabaseAdaptors/ AbstractDBAPI2AdaptorLayer/AbstractDBAPI2AdaptorChannel.py", line 159, in evaluateExpression raise GeneralAdaptorException, msg Modeling.Adaptor.GeneralAdaptorException: Couldn't evaluate expression insert into WRITER (id, age, last_name,first_name,birthday,fk_writer_id) values (1,24,'Cleese','John','1939-10-27 08:31:15',null). Reason: libpq.OperationalError:ERROR: Cannot insert a duplicate key into unique index writer_pkey ================================= |
From: Sebastien B. <sbi...@us...> - 2003-01-28 13:08:09
> [...] Now first error is 'Type "datetime" does not exist', and I would
> not say it is coming from mx.DateTime which seems properly installed
> (from interactive python command, can import mx and DateTime ...). Is
> it something to do with pypgsql?

Not at all, I identified the problem: the postgresql server that comes
w/ MacOS X must be a brand new postgres v7.3; and pg has stopped
supporting the datetime SQL data type since v7.3.

Quoted from documentation v7.2:
http://www.postgresql.org/docs/view.php?version=7.2&idoc=1&file=datatype-datetime.html:

<< To ensure an upgrade path from versions of PostgreSQL earlier than
7.0, we recognize datetime (equivalent to timestamp) and timespan
(equivalent to interval). These types are now restricted to having an
implicit translation to timestamp and interval, and support for these
will be removed in the next release of PostgreSQL (likely named 7.3). >>

(I couldn't find any other hints in the postgresql official
documentation or in the changelogs, but there's quite a lot of messages
about that on various mailing-lists and newsgroups)

> I am including all the output this time...

Sure, that was helpful. All errors consecutive to:

> error: Couldn't evaluate expression CREATE TABLE WRITER ( [...]

are due to the fact that this table cannot be created.

I need to examine the status of postgresql vs. mysql before making any
changes; in the meantime, I suggest that you edit
tests/testPackages/AuthorBooks/model_AuthorBooks.xml and change this
line:

  <attribute isClassProperty='1' columnName='BIRTHDAY' name='birthday'
   isRequired='0' precision='0' defaultValue='None'
   externalType='DATETIME' width='0' scale='0' type='DateTime'
   displayLabel='birthday'/>

so that it says (change externalType DATETIME to TIMESTAMP):

  <attribute isClassProperty='1' columnName='BIRTHDAY' name='birthday'
   isRequired='0' precision='0' defaultValue='None'
   externalType='TIMESTAMP' width='0' scale='0' type='DateTime'
   displayLabel='birthday'/>

I've no postgres v7.3 at hand but a v7.2.1, but it should be ok. I'll
send more when I have details.

-- Sébastien.
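[Editor's note: for anyone with several models to touch, the one-attribute substitution suggested above can also be scripted. A throwaway, purely illustrative sketch; it simply swaps the externalType value in place after keeping a backup, and assumes the file is small enough to read in one go.]

    # swap_datetime.py -- change externalType='DATETIME' to 'TIMESTAMP'
    # in an xml-model file, keeping a .bak copy of the original.
    import shutil
    import sys

    path = sys.argv[1]   # e.g. tests/testPackages/AuthorBooks/model_AuthorBooks.xml
    shutil.copyfile(path, path + '.bak')
    text = open(path).read()
    text = text.replace("externalType='DATETIME'", "externalType='TIMESTAMP'")
    open(path, 'w').write(text)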
From: Mario R. <ma...@ru...> - 2003-01-28 00:22:22
>> However I just had a look at the test_EditingContext_Global.py and it
>> seems that I forgot to commit some changes that can make the
>> following 2 tests fail: test_06_deleteRule_nullify(), and
>> test_999_customSQLQuery() w/ pypgsql. I need to check that, I'll
>> write back when I have some news about that.
>
> Ok, you can download the corrected test at:
> http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/*checkout*/modeling/ProjectModeling/Modeling/tests/test_EditingContext_Global.py?rev=1.12
>
> and it should be ok.
>
> -- Sebastien.

OK, I have downloaded it. There are other errors (and they were there
before, but I assumed they were related to the previous one -- however
there are fewer errors now ;-). I have also created the 2nd db as
instructed in the new README.

Now the first error is 'Type "datetime" does not exist', and I would
not say it is coming from mx.DateTime which seems properly installed
(from interactive python command, can import mx and DateTime ...). Is
it something to do with pypgsql?

I am including all the output this time...

Cheers, mario

% python ./test_EditingContext_Global.py -r
PostgresqlAdaptor: using pypgsql
Creating PostgresqlAdaptorChannel
<PostgresqlAdaptorLayer.PostgresqlAdaptorChannel.PostgresqlAdaptorChannel instance at 0x881ac0>
Transaction: BEGIN
Opening connection to the DB with conn.Dict: {'host': 'localhost', 'password': 'YYYY', 'user': 'postgres', 'database': 'AUTHOR_BOOKS'}
Opening channel <PostgresqlAdaptorLayer.PostgresqlAdaptorChannel.PostgresqlAdaptorChannel instance at 0x887b90> (0x887b90) Called
Evaluating: DROP TABLE WRITER
Couldn't evaluate expression DROP TABLE WRITER. Reason:
libpq.OperationalError:ERROR: table "writer" does not exist
Transaction: ROLLBACK
error: Couldn't evaluate expression DROP TABLE WRITER. Reason:
libpq.OperationalError:ERROR: table "writer" does not exist
Transaction: BEGIN
Evaluating: DROP TABLE BOOK
Transaction: COMMIT
Transaction: BEGIN
Evaluating: CREATE TABLE WRITER ( FIRST_NAME VARCHAR(40) , LAST_NAME VARCHAR(30) NOT NULL, FK_WRITER_ID INTEGER , AGE INTEGER , BIRTHDAY DATETIME , ID INTEGER NOT NULL)
Couldn't evaluate expression CREATE TABLE WRITER ( FIRST_NAME VARCHAR(40) , LAST_NAME VARCHAR(30) NOT NULL, FK_WRITER_ID INTEGER , AGE INTEGER , BIRTHDAY DATETIME , ID INTEGER NOT NULL). Reason:
libpq.OperationalError:ERROR: Type "datetime" does not exist
Transaction: ROLLBACK
error: Couldn't evaluate expression CREATE TABLE WRITER ( FIRST_NAME VARCHAR(40) , LAST_NAME VARCHAR(30) NOT NULL, FK_WRITER_ID INTEGER , AGE INTEGER , BIRTHDAY DATETIME , ID INTEGER NOT NULL). Reason:
libpq.OperationalError:ERROR: Type "datetime" does not exist
Transaction: BEGIN
Evaluating: CREATE TABLE BOOK ( PRICE NUMERIC(10,2) , title VARCHAR(40) NOT NULL, id INT NOT NULL, FK_WRITER_ID INTEGER )
Transaction: COMMIT
Transaction: BEGIN
Evaluating: ALTER TABLE WRITER ADD PRIMARY KEY (ID)
Couldn't evaluate expression ALTER TABLE WRITER ADD PRIMARY KEY (ID). Reason:
libpq.OperationalError:ERROR: Relation "writer" does not exist
Transaction: ROLLBACK
error: Couldn't evaluate expression ALTER TABLE WRITER ADD PRIMARY KEY (ID).
Reason: libpq.OperationalError:ERROR: Relation "writer" does not exist Transaction: BEGIN Evaluating: ALTER TABLE BOOK ADD PRIMARY KEY (id) Transaction: COMMIT Transaction: BEGIN Evaluating: ALTER TABLE WRITER ADD CONSTRAINT pygmalion FOREIGN KEY (FK_WRITER_ID) REFERENCES WRITER(ID) INITIALLY DEFERRED Couldn't evaluate expression ALTER TABLE WRITER ADD CONSTRAINT pygmalion FOREIGN KEY (FK_WRITER_ID) REFERENCES WRITER(ID) INITIALLY DEFERRED. Reason: libpq.OperationalError:ERROR: Relation "writer" does not exist Transaction: ROLLBACK error: Couldn't evaluate expression ALTER TABLE WRITER ADD CONSTRAINT pygmalion FOREIGN KEY (FK_WRITER_ID) REFERENCES WRITER(ID) INITIALLY DEFERRED. Reason: libpq.OperationalError:ERROR: Relation "writer" does not exist Transaction: BEGIN Evaluating: ALTER TABLE BOOK ADD CONSTRAINT author FOREIGN KEY (FK_WRITER_ID) REFERENCES WRITER(ID) INITIALLY DEFERRED Couldn't evaluate expression ALTER TABLE BOOK ADD CONSTRAINT author FOREIGN KEY (FK_WRITER_ID) REFERENCES WRITER(ID) INITIALLY DEFERRED. Reason: libpq.OperationalError:ERROR: Relation "writer" does not exist Transaction: ROLLBACK error: Couldn't evaluate expression ALTER TABLE BOOK ADD CONSTRAINT author FOREIGN KEY (FK_WRITER_ID) REFERENCES WRITER(ID) INITIALLY DEFERRED. Reason: libpq.OperationalError:ERROR: Relation "writer" does not exist Transaction: BEGIN Evaluating: DROP SEQUENCE PK_SEQ_WRITER Transaction: COMMIT Transaction: BEGIN Evaluating: DROP SEQUENCE PK_SEQ_BOOK Transaction: COMMIT Transaction: BEGIN Evaluating: CREATE SEQUENCE PK_SEQ_WRITER START 1 Transaction: COMMIT Transaction: BEGIN Evaluating: CREATE SEQUENCE PK_SEQ_BOOK START 1 Transaction: COMMIT Transaction: BEGIN Evaluating: select nextval('PK_SEQ_WRITER') Evaluating: select nextval('PK_SEQ_WRITER') Evaluating: select nextval('PK_SEQ_WRITER') Transaction: COMMIT Transaction: BEGIN Evaluating: insert into WRITER (id, age, last_name,first_name,birthday,fk_writer_id) values (1,24,'Cleese','John','1939-10-27 08:31:15',null) Couldn't evaluate expression insert into WRITER (id, age, last_name,first_name,birthday,fk_writer_id) values (1,24,'Cleese','John','1939-10-27 08:31:15',null). Reason: libpq.OperationalError:ERROR: Relation "writer" does not exist Transaction: ROLLBACK Traceback (most recent call last): File "./test_EditingContext_Global.py", line 943, in ? errs = main(sys.argv) File "./test_EditingContext_Global.py", line 927, in main if reinitDB_flag: reinitDB(); return File "./test_EditingContext_Global.py", line 846, in reinitDB channel.evaluateExpression(sqlExpr) File "/Users/frdricva/Desktop/ModelingCore-0.8.1/Modeling/DatabaseAdaptors/ AbstractDBAPI2AdaptorLayer/AbstractDBAPI2AdaptorChannel.py", line 159, in evaluateExpression raise GeneralAdaptorException, msg Modeling.Adaptor.GeneralAdaptorException: Couldn't evaluate expression insert into WRITER (id, age, last_name,first_name,birthday,fk_writer_id) values (1,24,'Cleese','John','1939-10-27 08:31:15',null). Reason: libpq.OperationalError:ERROR: Relation "writer" does not exist |
From: Sebastien B. <sbi...@us...> - 2003-01-27 23:21:31
> However I just had a look at the test_EditingContext_Global.py and it
> seems that I forgot to commit some changes that can make the following 2
> tests fail: test_06_deleteRule_nullify(), and test_999_customSQLQuery()
> w/ pypgsql. I need to check that, I'll write back when I have some news
> about that.

Ok, you can download the corrected test at:

http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/*checkout*/modeling/ProjectModeling/Modeling/tests/test_EditingContext_Global.py?rev=1.12

and it should be ok.

-- Sebastien.
From: Sebastien B. <sbi...@us...> - 2003-01-27 22:20:55
Hi,

> the tests for modeling 0.8.1 fail on this OSX machine. I am of course
> using pypgsql as the preferred adaptor -- thanks! I try to follow the
> sequence of commands in tests/README, and include the log below.

Thanks for pointing out the tests/README file, I didn't remember
writing it!

> Points to note: I created an empty db manually called "AUTHOR_BOOKS",
> because there was another error previously (do not have the log for
> it). If this must be done manually, then this should be pointed out in
> the installation doc.

Yes, you're right, databases should be created before executing the
script w/ the option '-r'. Two DBs should be created: "AUTHOR_BOOKS"
(test_EditingContext_Global.py) and "STORE_EMPLOYEES"
(test_EditingContext_Global_Inheritance.py). I'll update the README
file.

> Also, all other locations where name/password/otherinfo should be
> specified -- in the XML models?

These parameters do not need to be changed in the xml-models, you just
need to change the settings in the file tests/test.cfg.

About the errors you observed:

> Evaluating: DROP TABLE WRITER
> Couldn't evaluate expression DROP TABLE WRITER. Reason:
> libpq.OperationalError:ERROR: table "writer" does not exist

These errors are due to the fact that the database has just been
created, so it is empty; these statements just try to wipe out any
tables, PK and FK constraints and sequences that may exist in the
database, just before (re-)creating them, hence you see them fail. This
should be changed to the simpler 'DROP DATABASE' statement.

Errors during the '-r' phase can be safely ignored if the databases are
correctly initialized; so if the tests pass when the '-r' option is NOT
active, everything is ok.

However I just had a look at test_EditingContext_Global.py and it seems
that I forgot to commit some changes that can make the following 2
tests fail: test_06_deleteRule_nullify(), and test_999_customSQLQuery()
w/ pypgsql. I need to check that, I'll write back when I have some news
about that.

Thanks for reporting, tell me if you get more errors.

-- Sébastien.
From: Mario R. <ma...@ru...> - 2003-01-27 20:03:12
Hello,

the tests for modeling 0.8.1 fail on this OSX machine. I am of course
using pypgsql as the preferred adaptor -- thanks! I try to follow the
sequence of commands in tests/README, and include the log below.

Points to note: I created an empty db manually called "AUTHOR_BOOKS",
because there was another error previously (do not have the log for
it). If this must be done manually, then this should be pointed out in
the installation doc. Also, all other locations where
name/password/otherinfo should be specified -- in the XML models?

Many thanks for any help.

Best regards, mario

=========================================
% pwd
/Users/frdricva/Desktop/ModelingCore-0.8.1/Modeling/tests
% python run.py
PostgresqlAdaptor: using pypgsql
----------------------------------------------------------------------
Ran 85 tests in 24.261s

OK

% python ./test_EditingContext_Global.py -r
PostgresqlAdaptor: using pypgsql
Creating PostgresqlAdaptorChannel
<PostgresqlAdaptorLayer.PostgresqlAdaptorChannel.PostgresqlAdaptorChannel instance at 0x88b7d0>
Transaction: BEGIN
Opening connection to the DB with conn.Dict: {'host': 'localhost', 'password': 'XXXX', 'user': 'postgres', 'database': 'AUTHOR_BOOKS'}
Opening channel <PostgresqlAdaptorLayer.PostgresqlAdaptorChannel.PostgresqlAdaptorChannel instance at 0x8e7810> (0x8e7810) Called
Evaluating: DROP TABLE WRITER
Couldn't evaluate expression DROP TABLE WRITER. Reason:
libpq.OperationalError:ERROR: table "writer" does not exist
Transaction: ROLLBACK
error: Couldn't evaluate expression DROP TABLE WRITER. Reason:
libpq.OperationalError:ERROR: table "writer" does not exist

... many other subsequent similar errors ...
=========================================
From: Mario R. <ma...@ru...> - 2003-01-27 19:45:47
|
Thanks for the new core release. Pygresql seems to work just fine with=20= it. However there are some problems running the tests, which i will mail=20 about shortly. But before, please see my comments to some points below. ... >> Having to write XML instead of an excel file is not a major issue -- >> but it is less comfortable (but maybe the ZModel tool is even more >> comfortable?) > > Sure, using the ZModeler is much, much more comfortable than editing > an xml-file. But I understand installing and launching Zope just for > that can be a burden --I'm currently looking for an alternate > solution. Yes, but this should not be considered high priority. I feel it is more important to expose (to document) the interaction=20 between the different layers (tool - model description (xml) - python & sql).=20 Specifically, documentation for the XML format, documentation of the constraints on=20 the generated python classes and on the constraints on the generated sql=20 for the tables, and the utility or code to generate the python classes and the=20= sql to init and/or update the database with the corresponding tables. As I=20 understand, generation of python classes and sql statements is only available via=20 the Z tool at the moment? (Apologies beforehand if this info is in the doc already=20= and I have missed it...) Given this, it is quite likely that others may come up with such=20 tools... For example it could be quite simple to, borrowing the CSV idea,=20 generate your XML from a spreadsheet... opening up the possibility to use any old spreadsheet program as a basic model editor. Or, if the XML will not map nicely to CSV, some other cooked editor for this XML doctype. >> For large queries, can either of the frameworks do paging? In >> Modeling you can get the count, but, if the count is big, can you >> specify to fetch N objects at a time (respecting the original >> qualifiers, and sort, if any) ? >> >> Can you sort? (This is done in the store or in the db (much faster)) > > (I've no idea of what are the MiddleKit's capabilities on these=20 > points) To my knowledge there is absolutely no such chunking management in MiddleKit, but this question was no longer in the context of a=20 comparison between the two. Such chunking management becomes very useful when=20 user-interfacing to large sets of data. As for sort, MK does it on the DB side, by specifying sql order by=20 clauses. However do not know if that implies refetching all the objects each=20 time (imagine a typical interface showing a list of items, and clicking on a column=20 heading to sort those items...) , or simply merging the sort order with the=20 already loaded objects. > For both questions, wrt. the Modeling, answer is: not yet. > > 1. Work on paging has not began yet, Again, this would only be needed for large datasets... so later. But it=20= is good to foresee as many limitations as possible, and their possible=20 solutions. > 2. Modeling.SortOrdering is specifically designed for sorting=20 > purpose, > to be used along with Qualifiers in FetchSpecifications, but for > the moment it can only be used for in-memory sorts. This could be an optimization later -- instead of sorting on the store=20= it might be faster to do a db query to get only the sort info (get only the id?)=20= and then iterate on that sort order using the data in the store (or getting it when=20 missing). >> I must say that this is a problem! 
I have not yet managed to install >> Modeling because I could get neither pgdb nor psycopg installed on my >> OSX machine, as for some reason they will not compile (am following it >> up on the psycopg list). PyPgSQL is installed however... how >> difficult is it to add support for another adaptor such as pypgsql? > > Not *that* difficult ;) I released 0.8.1 today, adding support for > pypgsql. BTW, I am *very* interested in hearing from your and > others' > experiences with the modeling and MacOS X (and windows as well) since > the only OS I have at hand is linux. Great, thanks a lot for adding support for PyPgSQL. More soon... \-; ... > Being able to fetch raw rows that can be later converted to objects > is > indeed on the todo list (no ETA, still: it highly depends on users' > requests :). OK... > [...] >> So you mean you would have an objectstore cached in each user session? >> (Very memory expensive?) >> Or a shared application-level object store that all users that fetch >> would go through? (To modify they will have to refetch into a new >> store? >> Or this is where the new feature you link to about nested contexts >> comes in?) > > Sure, both designs are possible, that's an architectural choice. Back > to > your questions: > > 1. [an ObjectStore in each session] yes, since each object store > holds > its own copies of objects, this can be memory expensive. Future > features related to that topic include: > > - trying to maintain the number of objects per ObjectStore under > a given, user-defined, limit (I say ``trying'', because when > all objects in an object store are modified it is definitely > not possible to reduce the memory footprint) This can be very important for web applications with many open user sessions, all possibly accessing different datasets. > - having a shared ObjectStore where mostly-read-only objects > live and are shared by all other object stores. This is called > a Shared Editing Context in the EOF. Could be powerful in combination with the previous feature -- all requests for view-only data are serviced by a common read-only object store, while all modify-data requests are serviced by a (short-lived) user store that could be independent of the read-only store, or could be a store nested in it. The cost of refetching objects that are to be modified is much less critical than refetching objects for reading -- ***most*** of the time data is browsed for reading. > 2. Now what if you decide to have a shared app.-level object store? > The first consequence is that any modifications made by one user, > including non-committed ones, will be seen by others ; by the way, > modifications are made as usual on objects and do not require any > re-fetch. Yes, this would not be the correct way to go. >> Or this is where the new feature you link to about nested contexts >> comes in? > > It will, to a certain extent. I suppose you're thinking of having a > shared (root) object store, with one child (nested) object store > e.g. created for each user when modifications are about to be done. > In > this configuration, changes will not be seen by other users until > changes made in the child are committed back to the parent (the > parent/child configuration will really bring transactional abilities > to the sets of objects).
Last, fetching objects in a child object > store that were already fetched in a parent obj.store is just as > expensive as refetching already fetched objects in a single > obj.store: > definitely not as expensive as the first fetch, as shown in my > previous message. Again, the cost of fetching is (much) less important here. In addition, normally data is modified in small units, typically 1 object at a time. But when browsing, it is common to request fetches of 1000s of objects... > However, even in the still-uncommitted changes, the framework lacks > an important feature which is likely to be the next thing I'll work > on: the ability to broadcast the changes committed on one object > store > to the others (in fact, the notifications are already made, but they > are not properly handled yet). Yes, this would be necessary. I am not sure how the broadcast model will work (you may not always know who the end clients are -- data on a client web page will never know about these changes even if you broadcast). But a feasible simple strategy could be that each store keeps a copy of the original object as it received it. When committing a modification, it compares the original object(s) being modified to those in the db; if they are still identical, it locks the objects and commits the changes, otherwise it returns an error "object has changed since" (a rough sketch of this check follows this message). This behaviour could be application specific, i.e. in some cases you want to go ahead and modify anyway, and in others you want to do something else. Thus, this could be a store option, with the default being the error. In combination with this, other stores may still be notified of changes, and it is up to them (or their configuration) whether to reload the modified objects or to ignore the broadcasts. Otherwise, on each fetch the framework could check whether any changes have taken place for the concerned objects (which may be expensive), and reload them automatically. This has the advantage of covering those cases where changes to data in the db are made in some way beyond the framework, and thus the framework can never be sure to know about them. > Last, and although this is not directly connected to that topic, I'd > like > to make another point more precise: the framework does not yet implement > any locking, either optimistic or pessimistic, on DB rows. This > means that if other processes modify a row in the DB, an ObjectStore > will not notice the changes and will possibly override them if the > corresponding object has been modified in the meantime. Yes, but locking objects should be very short lived -- only the time to commit any changes, as mentioned above. This also makes it much less important, and only potentially a problem when two processes try to modify the same object at the same time. >> How about POM? >> Which could be interpreted as PyObjectManager, with the additional >> merit >> of retaining an implicit credit to Apple, as well as to its >> non-American author... >> never know, it might one day become Le Big Pomme :-) > > Very nice proposition, I'll think of it. Really! Ah, very good. mario > Best regards, > > -- Sébastien. |
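To make the compare-then-commit idea above concrete, here is a minimal sketch written against a plain DB-API cursor rather than the Modeling framework's API; the BOOK table, its columns and the psycopg-style %s parameters are assumptions for illustration only.

# Sketch of the "compare original to DB, then lock and commit" strategy
# discussed above. NOT Modeling framework code: it uses a raw DB-API cursor,
# and the table/column names are hypothetical.
def commit_if_unchanged(conn, book_id, original_title, new_title):
    curs = conn.cursor()
    # re-read the row and lock it for the rest of the transaction
    curs.execute("SELECT title FROM BOOK WHERE id=%s FOR UPDATE", (book_id,))
    row = curs.fetchone()
    if row is None or row[0] != original_title:
        conn.rollback()
        raise ValueError("object has changed since it was fetched")
    curs.execute("UPDATE BOOK SET title=%s WHERE id=%s", (new_title, book_id))
    conn.commit()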
From: Sebastien B. <sbi...@us...> - 2003-01-26 19:51:27
|
> Apologies for my earlier comment w.r.t. Zope -- I had only read your > doc, and the prominence of the ZModelization tool made me jump to a > wrong conclusion. What if the doc were re-organized a little to > separate "model concepts (ER)" from "your data description language" > (xml) and from "how the ddl is used by the framework" and from > "existing tools to manipulate the ddl"? I know, there's plenty to do > already ;) Thanks for your comments, I realize that the documentation is quite obscure on that point. I'll take your suggestions into account asap. > Having to write XML instead of an excel file is not a major issue -- > but it is less comfortable (but maybe the ZModel tool is even more > comfortable?) Sure, using the ZModeler is much, much more comfortable than editing an xml-file. But I understand installing and launching Zope just for that can be a burden --I'm currently looking for an alternate solution. > For large queries, can either of the frameworks do paging? In > Modeling you can get the count, but, if the count is big, can you > specify to fetch N objects at a time (respecting the original > qualifiers, and sort, if any)? > > Can you sort? (This is done in the store or in the db (much faster)) (I've no idea what MiddleKit's capabilities are on these points) For both questions, wrt. the Modeling, the answer is: not yet. 1. Work on paging has not begun yet, 2. Modeling.SortOrdering is specifically designed for sorting purposes, to be used along with Qualifiers in FetchSpecifications, but for the moment it can only be used for in-memory sorts. > I must say that this is a problem! I have not yet managed to install > Modeling because I could get neither pgdb nor psycopg installed on my > OSX machine, as for some reason they will not compile (am following it > up on the psycopg list). PyPgSQL is installed however... how > difficult is it to add support for another adaptor such as pypgsql? Not *that* difficult ;) I released 0.8.1 today, adding support for pypgsql. BTW, I am *very* interested in hearing from your and others' experiences with the modeling and MacOS X (and windows as well) since the only OS I have at hand is linux. > BTW, the DEPENDENCIES file in 0.8 is a null reference (to another > non-existent file). Sorry, my fault: it was in the cvs-tree, I just forgot to include it in the distributed tarball --fixed in v0.8.1. > Is the time spent in converting the response tuples to initialized > objects? > (If so, could also do this conversion "only when needed"?) > Or in doing consistency checking? (Could provide an option to switch > this off when fetching?) Anyway, optimizations would be for later but it > is good to know where and what could be sped up. Well, I do not have in mind the exact distribution, but I can say that most of the time is spent in converting raw rows to objects, right. Being able to fetch raw rows that can be later converted to objects is indeed on the todo list (no ETA, still: it highly depends on users' requests :). [...] > So you mean you would have an objectstore cached in each user session? > (Very memory expensive?) > Or a shared application-level object store that all users that fetch > would go through? (To modify they will have to refetch into a new store? > Or this is where the new feature you link to about nested contexts > comes in?) Sure, both designs are possible, that's an architectural choice. Back to your questions: 1.
[an ObjectStore in each session] yes, since each object store holds its own copies of objects, this can be memory expensive. Future features related to that topic include: - trying to maintain the number of objects per ObjectStore under a given, user-defined, limit (I say ``trying'', because when all objects in an object store are modified it is definitely not possible to reduce the memory footprint) - having a shared ObjectStore where mostly-read-only objects live and are shared by all other object stores. This is called a Shared Editing Context in the EOF. 2. Now what if you decide to have a shared app.-level object store? The first consequence is that any modifications made by one user, including non-committed ones, will be seen by others ; by the way, modifications are made as usual on objects and do not require any re-fetch. > Or this is where the new feature you link to about nested contexts > comes in? It will, to a certain extent. I suppose you're thinking of having a shared (root) object store, with one child (nested) object store e.g. created for each user when modifications are about to be done. In this configuration, changes will not be seen by other users until changes made in the child are committed back to the parent (the parent/child configuration will really bring transactional abilities to the sets of objects). Last, fetching objects in a child object store that were already fetched in a parent obj.store is just as expensive as refetching already fetched objects in a single obj.store: definitely not as expensive as the first fetch, as shown in my previous message. However, even in the still-uncommitted changes, the framework lacks an important feature which is likely to be the next thing I'll work on: the ability to broadcast the changes committed on one object store to the others (in fact, the notifications are already made, but they are not properly handled yet). Last, and although this is not directly connected to that topic, I'd like to make another point more precise: the framework does not yet implement any locking, either optimistic or pessimistic, on DB rows. This means that if other processes modify a row in the DB, an ObjectStore will not notice the changes and will possibly override them if the corresponding object has been modified in the meantime. > How about POM? > Which could be interpreted as PyObjectManager, with the additional merit > of retaining an implicit credit to Apple, as well as to its > non-American author... > never know, it might one day become Le Big Pomme :-) Very nice proposition, I'll think of it. Really! Best regards, -- Sébastien. |
From: Sebastien B. <sbi...@us...> - 2003-01-26 19:47:35
|
Hi all, The Modeling core v0.8.1 has been released today. It includes support for the python postgresql adaptor PyGreSQL, plus an additional mechanism one can use to choose which python pg-adaptor should be preferred by Modeling.DatabaseAdaptors.PostgresqlAdaptorLayer. More explicitly, as stated in the docstrings of PostgresqlAdaptorLayer.PostgresqlAdaptor, the modules are searched in the following order: psycopg, then pgdb, then PyGreSQL. If you have more than one of these modules installed, you may want to decide which one should be used. The environment variable 'PREFERRED_PYTHON_POSTGRESQL_ADAPTOR' is now checked by the framework, and, if set to either 'psycopg', 'pgdb' or 'pygresql', the corresponding python postgresql module will be preferred over the others. The full changelog is included at the end of this message. Regards, -- Sébastien.
------------------------------------------------------------------------
0.8.1
-----
* Added the python postgresql module PyGreSQL to the list of modules that the PostgresqlAdaptorLayer can use. The PostgresqlAdaptorLayer now checks the environment variable 'PREFERRED_PYTHON_POSTGRESQL_ADAPTOR'
* Fixed MANIFEST.in: forgot to include top-level files, mainly INSTALL and DEPENDENCIES. Thanks to Mario <ma...@ru...> for reporting the problem.
* Fixed KeyValueCoding.valueForKeyPath: a 'map' statement was missing its second argument
* misc.: GlobalID's docstrings updated.
------------------------------------------------------------------------ |
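A minimal sketch of selecting the preferred postgresql module: the variable name and the accepted values come from the announcement above, while setting it from Python before the adaptor layer is loaded (rather than exporting it in the shell) is just one possible way to do it.

# Choose the preferred python postgresql module via the environment variable
# described in the announcement; 'pygresql' is one of the documented values
# ('psycopg', 'pgdb', 'pygresql').
import os
os.environ['PREFERRED_PYTHON_POSTGRESQL_ADAPTOR'] = 'pygresql'
# equivalently, in the shell:  export PREFERRED_PYTHON_POSTGRESQL_ADAPTOR=pygresql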
From: Mario R. <ma...@ru...> - 2003-01-26 00:48:13
|
Hi, > First of all, and answering your previous message, the Modeling > Framework is NOT tightly dependent on/linked to Zope ; only the Modeler > needs Zope. Apologies for my earlier comment w.r.t. Zope -- I had only read your doc, and the prominence of the ZModelization tool made me jump to a wrong conclusion. What if the doc were re-organized a little to separate "model concepts (ER)" from "your data description language" (xml) and from "how the ddl is used by the framework" and from "existing tools to manipulate the ddl"? I know, there's plenty to do already ;) > The models used by the framework are xml-models, so it is > possible to write them without having access to the ZModeler (just the > way you write csv-models for use by MiddleKit, using your favorite > editor ; that's not *that* straightforward, though). Having to write XML instead of an excel file is not a major issue -- but it is less comfortable (but maybe the ZModel tool is even more comfortable?) > Comparison between the two frameworks is not an easy task because I > do > not have real-world experience with the MiddleKit Framework. However, > examining the documentation, I can try to highlight some facts: Thanks a lot for the comprehensive comparison. As I have only played with MK, I would not be in any better position to comment on MK itself. (This comparison could be added as an FAQ?) ... > - Regarding the frameworks' capabilities: > > - the modeling has a built-in query language which is > automatically > turned into an SQL query > http://modeling.sourceforge.net/UserGuide/ec-fetch-object.html > > -- I could not find something like this in MiddleKit (except, > of > course, fetch all data then python-filter the results) Note: > marked as FUTURE in MiddleKit.Run.ObjectStore In MK you can specify clauses, which as I understand are straight SQL clauses, and are appended to the generated SELECT statement. However, I do not think MK has the "fetch only when needed" that you mention for Modeling. For large queries, can either of the frameworks do paging? In Modeling you can get the count, but, if the count is big, can you specify to fetch N objects at a time (respecting the original qualifiers, and sort, if any)? (A rough DB-API paging sketch appears after this message.) Can you sort? (This is done in the store or in the db (much faster)) > - the modeling provides all ACID properties ; I can't tell for > MiddleKit > (ACID: Atomicity, Consistency, Isolation, and Durability, see > e.g. http://www.zope.org/Wikis/ZODB/guide/node12.html for > details) I can't say for sure that it does not. > - Misc. > > - installation: it seems that MiddleKit can be easily installed ; > installing the Modeling requires quite a bunch of third-party > packages (see the file DEPENDENCIES), including the WebWare's > Cheetah ;) I must say that this is a problem! I have not yet managed to install Modeling because I could get neither pgdb nor psycopg installed on my OSX machine, as for some reason they will not compile (am following it up on the psycopg list). PyPgSQL is installed however... how difficult is it to add support for another adaptor such as pypgsql? BTW, the DEPENDENCIES file in 0.8 is a null reference (to another non-existent file). > That's exactly the kind of things I'd like to add to the documentation > online.
These are the results of some quick tests I just made so that > you can get an idea: > > [Computer: AMD Athlon XP 1.5GHz, 512 Mo RAM, HDD on IDE] > Tests made with postgresql and python running on the same machine > > On a sample table having 21 attributes (mostly varchars 255 or so) > and > 1423 rows, these are the raw times of fetching all the rows (select * > from TABLE):
> python2.1 | python2.2
> -----------------+-------------------
> w/ Modeling (using psycopg): ~6.7s / 0.75s | ~5.4s / ~0.60s
> w/ psycopg (execute+fetchall): ~0.11s / 0.11s | ~0.11s / ~0.11s
> Note: the first figure corresponds to the first fetch, the > second > one to subsequent ones. Excellent, thanks. Although the difference is big -- always the price to pay with such logical layering. Is the time spent in converting the response tuples to initialized objects? (If so, could also do this conversion "only when needed"?) Or in doing consistency checking? (Could provide an option to switch this off when fetching?) Anyway, optimizations would be for later, but it is good to know where and what could be sped up. > Remember that the modeling returns fully initialized objects (just > like the MiddleKit does) and that psycopg returns raw rows. However, > the factor is quite a big one and you see a big difference. By the > way you'll notice that when objects are already fetched (even using > different qualifiers), the framework detects it and fetches > significantly faster (nb: any changes previously made to an object > are not discarded). Very nice. > One big conclusion out of this is that it is definitely not a good idea > to use the modeling framework in e.g. a cgi script: initialization > and fetch times would be too expensive. But desktop applications or > web-apps built on any application server (if it provides a mechanism > for dealing with sessions) do not suffer these init. times, and can > largely benefit from the framework. So you mean you would have an objectstore cached in each user session? (Very memory expensive?) Or a shared application-level object store that all users that fetch would go through? (To modify they will have to refetch into a new store? Or this is where the new feature you link to about nested contexts comes in?) > Last, 2 applications using the modeling have been successfully put in > production for three months. One is a zope-based web app. dedicated to > the supervision of doctors on duty for the French emergency services > and > associated services, the other is a python-gtk-based app. for > bookshops' stock and sales management (size of the currently installed > DB: ~160Mo). More details about these projects will be put online in > the next weeks, hopefully. Good to know. > Important ``detail'': they both use a not-yet-released version > of the framework, v0.9, planned to be released before > 15/02/03, or sooner if I find time or if you insist! This > release will be made of bug-fixes and enhancements, including > Feature Request 617997 > > [https://sourceforge.net/tracker/index.php?func=detail&aid=617997&group_id=58935&atid=489338]
I know the name is not a > good one, so I'm open to any suggestions, but I do not know if it is > ok to call it python-EOF since EOF is a trademark. How about POM? Which could be interpreted as PyObjectManager, with the additional merit of retaining an implicit credit to Apple, as well as to its non-American author... you never know, it might one day become Le Big Pomme :-) > -- Sébastien. > Best regards, mario |
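Picking up the paging question raised in this message: outside the framework, DB-side paging can be sketched with a plain DB-API cursor and PostgreSQL's LIMIT/OFFSET. The table name, the ORDER BY column and the psycopg-style %s parameters below are placeholders, not Modeling API.

# Hedged sketch of DB-side paging (the "fetch N objects at a time" idea),
# using a raw DB-API 2.0 cursor with psycopg-style %s parameters.
# Table and column names are placeholders.
def fetch_page(cursor, page, page_size=100):
    offset = page * page_size
    cursor.execute("SELECT * FROM BOOK ORDER BY title LIMIT %s OFFSET %s",
                   (page_size, offset))
    return cursor.fetchall()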
From: Sebastien B. <sbi...@us...> - 2003-01-25 18:21:57
|
Hi, Mario <ma...@ru...> wrote: > I am aware of (and have experimented with) another such layer for python, > MiddleKit, included in Webware (http://webware.sourceforge.net/) but > usable outside of it. It is also inspired by WebObjects' EOF, > but seems to take a more pragmatic approach, providing a CSV model > definition format from which tables and classes are generated. Are you > familiar with this package? How would these two frameworks compare? I must say that I've been told of MiddleKit; however, I never actually tried it --my comments will be derived from the online documentation and from a quick look at the source code. Since you have experimented with MiddleKit, please correct me when I'm wrong. First of all, and answering your previous message, the Modeling Framework is NOT tightly dependent on/linked to Zope ; only the Modeler needs Zope. The models used by the framework are xml-models, so it is possible to write them without having access to the ZModeler (just the way you write csv-models for use by MiddleKit, using your favorite editor ; that's not *that* straightforward, though). Comparison between the two frameworks is not an easy task because I do not have real-world experience with the MiddleKit Framework. However, examining the documentation, I can try to highlight some facts: - both are indeed inspired by WO's EOF (the modeling currently sticks to the same API as the EOF's) - Regarding models: - MiddleKit seems to support class inheritance, and so does the Modeling (in a limited manner, though: only horizontal mapping is supported for the time being, I don't know what the situation is in MK) - types 'enum' and 'bool' are not supported out-of-the-box by the modeling framework, but can easily be expressed by custom business-logic. Neither supports binary data types/BLOB/etc. - both can generate SQL code for DB-schemas and python code ready to be used - neither supports many-to-many relationships automatically - Regarding the frameworks' capabilities: - the modeling has a built-in query language which is automatically turned into an SQL query http://modeling.sourceforge.net/UserGuide/ec-fetch-object.html -- I could not find something like this in MiddleKit (except, of course, fetch all data then python-filter the results) Note: marked as FUTURE in MiddleKit.Run.ObjectStore - the modeling provides all ACID properties ; I can't tell for MiddleKit (ACID: Atomicity, Consistency, Isolation, and Durability, see e.g. http://www.zope.org/Wikis/ZODB/guide/node12.html for details) - the 2 object-relational mapping cores (driven by ObjectStores in both cases) are automatically triggered in both cases, maintaining the details of objects' persistence ``largely behind the scenes'' (quoted from MK's doc.) - Misc. - installation: it seems that MiddleKit can be easily installed ; installing the Modeling requires quite a bunch of third-party packages (see the file DEPENDENCIES), including the WebWare's Cheetah ;) - Supported DBs: MiddleKit: MySQL, MS SQL Server; Modeling: mysql, postgresql Mario> Any ideas of performance? For example, on a fetch of 100, 1000, Mario> or so objects, what is the difference between going through DB-API Mario> directly, and going through this bridge? Have you ever had the Mario> opportunity to get a feeling for this in some way? That's exactly the kind of things I'd like to add to the documentation online.
These are the results of some quick tests I just made so that you can get an idea: [Computer: AMD Athlon XP 1.5GHz, 512 Mo RAM, HDD on IDE] Tests made with postgresql and python running on the same machine. On a sample table having 21 attributes (mostly varchars 255 or so) and 1423 rows, these are the raw times of fetching all the rows (select * from TABLE):
python2.1 | python2.2
-----------------+-------------------
w/ Modeling (using psycopg): ~6.7s / 0.75s | ~5.4s / ~0.60s
w/ psycopg (execute+fetchall): ~0.11s / 0.11s | ~0.11s / ~0.11s
Note: the first figure corresponds to the first fetch, the second one to subsequent ones. Remember that the modeling returns fully initialized objects (just like the MiddleKit does) and that psycopg returns raw rows. However, the factor is quite a big one and you see a big difference. By the way you'll notice that when objects are already fetched (even using different qualifiers), the framework detects it and fetches significantly faster (nb: any changes previously made to an object are not discarded). One big conclusion out of this is that it is definitely not a good idea to use the modeling framework in e.g. a cgi script: initialization and fetch times would be too expensive. But desktop applications or web-apps built on any application server (if it provides a mechanism for dealing with sessions) do not suffer these init. times, and can largely benefit from the framework. Last, 2 applications using the modeling have been successfully put in production for three months. One is a zope-based web app. dedicated to the supervision of doctors on duty for the French emergency services and associated services, the other is a python-gtk-based app. for bookshops' stock and sales management (size of the currently installed DB: ~160Mo). More details about these projects will be put online in the next weeks, hopefully. Important ``detail'': they both use a not-yet-released version of the framework, v0.9, planned to be released before 15/02/03, or sooner if I find time or if you insist! This release will be made of bug-fixes and enhancements, including Feature Request 617997 [https://sourceforge.net/tracker/index.php?func=detail&aid=617997&group_id=58935&atid=489338] > A suggestion, if I may. How about an easily recognizable and short name, > such as PEOF (python EOF)? Well, this is exactly where I feel touchy... I know the name is not a good one, so I'm open to any suggestions, but I do not know if it is ok to call it python-EOF since EOF is a trademark. -- Sébastien. |
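For reference, a hedged sketch of the raw-psycopg side of the benchmark above (the execute+fetchall row); the connection string and table name are placeholders, and the Modeling-side timing is omitted because its fetch API is not shown in this thread.

# Rough timing of the raw DB-API side of the benchmark quoted above
# (psycopg, execute + fetchall). Connection parameters and the table name
# are placeholders; Python 2-era print statements to match the period.
import time
import psycopg

conn = psycopg.connect("dbname=AUTHOR_BOOKS user=postgres host=localhost")
curs = conn.cursor()
start = time.time()
curs.execute("SELECT * FROM BOOK")
rows = curs.fetchall()
print "%d rows fetched in %.2fs" % (len(rows), time.time() - start)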
From: Mario R. <ma...@ru...> - 2003-01-24 10:57:40
|
Hello Sébastien, all, Bravo for a very clean and elegant package. I have only read through your documentation -- I am convinced that this kind of middle layer is the way to go. If I may, I have a couple of questions: I am aware of (and have experimented with) another such layer for python, MiddleKit, included in Webware (http://webware.sourceforge.net/) but usable outside of it. It is also inspired by WebObjects' EOF, but seems to take a more pragmatic approach, providing a CSV model definition format from which tables and classes are generated. Are you familiar with this package? How would these two frameworks compare? Any ideas of performance? For example, on a fetch of 100, 1000, or so objects, what is the difference between going through DB-API directly, and going through this bridge? Have you ever had the opportunity to get a feeling for this in some way? A suggestion, if I may. How about an easily recognizable and short name, such as PEOF (python EOF)? Best regards, Mario Ruggier |
From: Mario R. <ma...@ru...> - 2003-01-20 01:19:15
|
Hi, FYI, in case you might not be aware of it... an interesting bridge for postgres (in particular, but not only) was announced a few days ago (on Daily Python-URL, http://www.pythonware.com/daily/index.htm 2003-01-14, titled "Modeling Framework, an object-relational bridge for Python", by Sébastien Bigaret). It is currently kind of bolted to Zope, but it would be very interesting to have it also available for twisted. mario On lundi, jan 20, 2003, at 00:21 Europe/Amsterdam, Mike Thompson wrote: > >> I'm considering using ZEO/ZODB to backend a Twisted server. I.e. as >> transactions arrive at the server, they are stored thru ZEO into a >> ZODB >> database. > >> Is this combination (ZEO/Twisted) known to work/not-work? Any > info/pointers >> appreciated. > >> Yours drowning in technology, >> Mike. > > Okay. I think I've answered my question. I've found this Feb-'02 post > in the > Zope3-dev archives ... > > http://lists.zope.org/pipermail/zope3-dev/2002-February/000516.html > > Seems to say that Twisted and ZEO do not mix. $@#$!@* > > This post is almost a year old, so I'm wondering if anything has > happened since it was written. Has the release of Standalone ZODB > prompted > something, said he hopefully. > > Is the potentially blocking nature of "ZEO commits" the problem? Or > is the > handling of the "ZEO cache invalidation", er, messages? Or is it > something > else entirely? > > Thanks, > Mike. > > > > _______________________________________________ > Twisted-Python mailing list > Twi...@tw... > http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python |
From: Sebastien B. <sbi...@us...> - 2003-01-14 11:23:48
|
Hi, The Modeling Framework, v0.8, and the Notification framework, v0.4, were released today. Both are now under the GNU General Public License. Most important changes in these releases: - Modeling package: EditingContext's new feature: it can now examine updated and inserted objects to detect newly created objects in relations, so that they are in turn inserted into the EditingContext. CustomObject (resp. ClassDescription) has a new method to support this feature: propagateInsertionWithEditingContext (resp. propagateInsertionForObject). EC.insertObject() was consequently changed to issue a warning instead of raising ValueError when it is called with an already registered object AND EditingContext.propagatesInsertionForRelatedObjects() is true. This feature is related to the natural way the ZODB behaves, where the statement 'obj.foobar=obj2' makes obj2 persistent within the zodb when obj is already persistent. - ZModelizationTool: - Feature added: it is now possible to create a relationship (and its inverse) in one single step --includes: automatic generation of the foreign key, if requested - Database schema generation: added a button to show (as opposed to: execute) the SQL statements that would be executed by the other button 'Do it!' All other changes are bug-fixes --see the CHANGES files in their respective packages. Another release is planned within a month ; it will mainly address RFE#617997 (https://sourceforge.net/tracker/?group_id=58935&atid=489338). It will make it possible to make changes in a subset of a set of objects and then commit all changes, or discard them, as a whole. In other words, it will be possible to think ``transactional'' at the OO-level. ETA: ~Feb. 15, 2003. -- Sebastien. |
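To illustrate the insertion-propagation behaviour described in this announcement, here is a small toy model of the idea in plain Python. It is not the framework's code: insertObject() and the warn-instead-of-raise behaviour are taken from the text above, while the class structure and the 'related' attribute are made up for the illustration.

# Toy illustration (NOT framework code) of the propagation idea above: once an
# object is registered, objects reachable through its relations get registered
# too, and re-inserting a known object only warns instead of raising.
class ToyEditingContext:
    def __init__(self):
        self.registered = []
    def insertObject(self, obj):
        if obj in self.registered:
            print "warning: object already registered"  # v0.8 warns, no ValueError
            return
        self.registered.append(obj)
        self.propagateInsertion(obj)
    def propagateInsertion(self, obj):
        # walk the object's relations (here: a hypothetical 'related' list)
        for other in getattr(obj, 'related', []):
            if other not in self.registered:
                self.insertObject(other)

class Obj:
    def __init__(self):
        self.related = []

ec = ToyEditingContext()
a, b = Obj(), Obj()
a.related.append(b)   # like 'obj.foobar = obj2' in the ZODB analogy
ec.insertObject(a)    # registers a, then b through the relation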
From: Sebastien B. <sbi...@us...> - 2002-09-19 18:25:07
|
Hi, Releases 0.7a5 and 0.7a5-1 contain a very annoying bug when using the ZModelizationTool: changes are not persistent --i.e. they are LOST-- after Zope restarts (or after a refresh). To correct this, please get the latest CVS version of Entity.py (v1.7), or apply the patch included below. Additionally, if you update your ZModelizationTool from the CVS head, you'll get a nice new feature: the ability to create a relationship and its inverse in one single click (see a model's property page) --includes: generation of the foreign key, if requested. -- Sebastien. |
From: Sebastien B. <sbi...@us...> - 2002-09-12 16:50:07
|
Hi, > A new release is available at http://modeling.sf.net. ...and its installation procedure is buggy. I forgot to include the cheetah-templates in the source distribution, which caused setup.py to fail unconditionally (even if the corresponding/generated python files are there). Both problems are corrected in v0.7a5-1. If you have already downloaded 0.7a5, you can either comment out the line that says 'sys.exit(1)' in setup.py, or get the new one from: http://cvs.sf.net/cgi-bin/viewcvs.cgi/modeling/?only_with_tag=release-0_7a5-1 -- Sebastien. |
From: Sebastien B. <Seb...@in...> - 2002-09-11 22:03:11
|
Hi, A new release is available at http://modeling.sf.net. The major enhancements and changes are: - Added the ability to check a model against the most common errors made when designing a model. This feature is available through the ZModelizationTool. - Added support for MySQL, and the PostgreSQL db-adaptor can now use another python DB-API 2.0-compliant module: psycopg --this one is now preferred to the module 'pgdb' when both are available (I noticed that I got better performance with the former than with the latter). An AbstractDBAPI2AdaptorLayer is also distributed to make it easier to write new adaptors for databases for which a python DB-API 2.0-compliant module is available. - The DatabaseAdaptors, previously distributed separately, are now shipped with the core package. Their location changed as well: they are now available under the package 'Modeling.DatabaseAdaptors' (was: package 'DatabaseAdaptors') --see also the specific instructions for upgrading, below. - Added support for Date objects - Added full support for python2.2 - lots of bug fixes! --> See the packages' 'CHANGES' files for more details. Instructions: Upgrading from 0.7a3 to 0.7a5 ------------ * Delete $prefix/lib/python2.1/site-packages/DatabaseAdaptors * If you are using the ZModelizationTool to design your models: - save your models - delete and recreate your instance of the ZModelizationTool - reload your models from the previously saved xml files. * Install as usual: the core (0.7a5) and the NotificationFramework (0.3) are installed via the python-distutils, while the ZModelizationTool, available in the ZModeling package, is installed in the Zope/Products directory. -- Sebastien. |
From: Sebastien B. <sbi...@us...> - 2002-08-21 20:52:20
|
Hi, Two critical bugs were discovered in v0.7a3 and are now solved [bugs #598164 and #598167]. Both were defeating one of the major functionalities of EditingContext: ensuring the uniqueness of any object within an EditingContext (see http://modeling.sourceforge.net/UserGuide/editing-context.html, paragraph 4.1). This leads the framework to misbehave in various situations as soon as inheritance is involved ; additionally the framework can, in some situations, also misbehave even when no inheritance can be found in the model (this problem occurs when a primary-key value is promoted from 'int' to 'long int' by the underlying database adaptor). Users are strongly encouraged to update their copy of the framework from the CVS repository --until a new release is made. -- Sebastien. |