Re: [Modeling-users] (LONG) Other OR-bridges?
From: Sebastien B. <sbi...@us...> - 2003-01-28 23:19:50
Hi,

Back to your comments:

> > Sure, using the ZModeler is much, much more comfortable than
> > editing an xml-file. But I understand installing and launching
> > Zope just for that can be a burden --I'm currently looking for an
> > alternate solution.
>
> Yes, but this should not be considered high priority. I feel it is
> more important to expose (to document) the interaction between the
> different layers (tool - model description (xml) - python & sql).
> Specifically, documentation for the XML format, documentation of the
> constraints on the generated python classes and on the constraints on
> the generated sql for the tables, and the utility or code to generate
> the python classes and the sql to init and/or update the database with
> the corresponding tables. As I understand, generation of python
> classes and sql statements is only available via the Z tool at the
> moment? (Apologies beforehand if this info is in the doc already and I
> have missed it...)

No need to apologize, you're pointing out the right things!

- Model design, model validation, generation of databases and schemas,
  and generation of python classes are available in the ZModeler.

- Updating a database schema after changes made on a model is not
  available --that feature would belong in the tool, not in the core.
  Reverse-engineering an existing database is not available either; it
  has been on the todo list for some time.

In a previous mail you mentioned the prominence of the ZModeler and
suggested a different organization of the documentation. This should
be done, for sure. I've begun writing a tutorial, hence the problem of
making the code/sql generation available apart from Zope was
identified, and I should be able to release the scripts soon.

- Documentation for the xml format: in a way, there is some, even if
  it's not that obvious... In fact, section 2.2 in the User's Guide
  describes each element that can be found in an xmlmodel (sample xml
  models are in testPackages.AuthorBooks and StoreEmployees).
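To make the code/sql generation discussion a bit more concrete, here is a
minimal, self-contained sketch of what a standalone generation script could
do, outside Zope. Everything here is invented for illustration --the
dict-based model description, the function names, the type mapping-- and is
NOT the framework's actual generator; the real model format is the xmlmodel
described in section 2.2 of the User's Guide.

```python
# Toy sketch: derive SQL DDL and python class stubs from a trivial
# model description.  Illustration only, not the framework's API.

SQL_TYPES = {'int': 'INTEGER', 'str': 'VARCHAR(255)'}

def sql_for_entity(name, attributes, pk='id'):
    """Return a CREATE TABLE statement for one entity.
    `attributes` is a list of (attribute_name, type_name) pairs."""
    cols = ['%s SERIAL PRIMARY KEY' % pk]
    cols += ['%s %s' % (attr, SQL_TYPES[t]) for attr, t in attributes]
    return 'CREATE TABLE %s (%s);' % (name.lower(), ', '.join(cols))

def class_for_entity(name, attributes):
    """Return python source for a minimal class, one slot per attribute."""
    lines = ['class %s:' % name,
             '    def __init__(self):']
    lines += ['        self.%s = None' % attr for attr, _ in attributes]
    return '\n'.join(lines)

model = [('Book', [('title', 'str'), ('price', 'int')])]
for entity, attrs in model:
    print(sql_for_entity(entity, attrs))
    print(class_for_entity(entity, attrs))
```

A real script would of course parse the xmlmodel instead of a hard-coded
dict, but the two outputs (DDL for the database, stubs for python) are the
same ones the ZModeler produces today.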
> Given this, it is quite likely that others may come up with such
> tools... For example it could be quite simple to, borrowing the CSV
> idea, generate your XML from a spreadsheet... opening up the
> possibility to use any old spreadsheet program as a basic model
> editor. Or, if the XML will not map nicely to CSV, some other cooked
> editor for this XML doctype.

I can't see precisely how to design a csv that maps nicely to the xml,
but I'm open to suggestions! Likewise, anyone feeling like coding some
tools will be welcome ;) and in that case, I'll start a dev-branch and
share the unreleased-coz'-unfinished code I already have.

[...]

> > 2. Modeling.SortOrdering is specifically designed for sorting
> > purposes, to be used along with Qualifiers in
> > FetchSpecifications, but for the moment it can only be used for
> > in-memory sorts.
>
> This could be an optimization later -- instead of sorting on the store
> it might be faster to do a db query to get only the sort info (get
> only the id?) and then iterate on that sort order using the data in
> the store (or getting it when missing).

That's the idea. The only thing that needs to be done is to generate
SQL code from a SortOrdering object.

> > [about] having a shared ObjectStore where mostly-read-only objects
> > live and are shared by all other object stores. This is called
> > a Shared Editing Context in the EOF.
>
> Could be powerful in combination with previous feature -- all requests
> for view-only data are serviced by a common read-only object store,
> while all modify-data requests are serviced by a (short-lived) user
> store that could be independent from the read-only store, or could be
> a nested store to it.

A nested store to a read-only object store? That's a very, very good
idea, I didn't think about that --I must still be too impregnated with
the way the EOF handles that. This would make it easy to update
mostly-read-only objects while keeping the memory footprint low.
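Back to the CSV-from-a-spreadsheet suggestion above: one mapping that could
work is one CSV row per attribute, grouped by entity name. A sketch, with
invented element and column names (`model`, `entity`, `attribute`) that
would have to be adjusted to the real xmlmodel doctype:

```python
import csv
import io
import xml.etree.ElementTree as ET

def csv_to_model_xml(csv_text):
    """One CSV row per attribute: entity,attribute,type.
    Rows sharing an entity name are grouped under one <entity> element."""
    root = ET.Element('model')
    entities = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        ent = entities.get(row['entity'])
        if ent is None:
            ent = ET.SubElement(root, 'entity', name=row['entity'])
            entities[row['entity']] = ent
        ET.SubElement(ent, 'attribute',
                      name=row['attribute'], type=row['type'])
    return ET.tostring(root, encoding='unicode')

# What an exported spreadsheet might look like:
sheet = """entity,attribute,type
Book,title,string
Book,price,int
Author,name,string
"""
print(csv_to_model_xml(sheet))
```

Relationships would need either a second sheet or a reserved "type" column
value, which is probably where the flat-CSV idea starts to strain.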
I just dropped a note in the TODO list about that, thanks!

> Again, cost of fetching is (much) less important here. In addition,
> normally data is modified in small units, typically 1 object at a
> time. But when browsing, it is common to request fetches of 1000s of
> objects...

That's exactly why there should be the possibility to fetch and get
raw rows, not objects.

> > However, even in the still-uncommitted changes, the framework
> > lacks an important feature which is likely to be the next thing
> > I'll work on: the ability to broadcast the changes committed on
> > one object store to the others (in fact, the notifications are
> > already made, but they are not properly handled yet).
>
> Yes, this would be necessary. Not sure how the broadcast model will
> work (may not always know who the end clients are -- data on a client
> web page will never know about these changes even if you
> broadcast). But, possibly a feasible simple strategy could be that
> each store will keep a copy of the original object as it received
> it.

There is a central and shared object which stores the snapshots when
rows are fetched; you'll find that in class Database. FYI the
responsibility of broadcasting messages is assigned to the
NotificationCenter distributed in the NotificationFramework, apart
from the Modeling.

> When committing a modify it compares the original object(s) being
> modified to those in the db, if still identical, locks the objects and
> commits the changes, otherwise returns an error "object has changed
> since". This behaviour could be application specific, i.e. in some
> cases you want to go ahead and modify anyway, and in others you want
> to do something else. Thus, this could be a store option, with default
> being the error.
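For readers unfamiliar with the broadcast mechanism mentioned above: the
idea is plain observer-style notification. A minimal sketch in the spirit
of a notification center --the names here are invented and do not reflect
the NotificationFramework's actual API:

```python
# Toy notification center: observers register for a named notification,
# posters fire it, and every registered callback is invoked.

class NotificationCenter:
    def __init__(self):
        self._observers = {}   # notification name -> list of callables

    def add_observer(self, name, callback):
        self._observers.setdefault(name, []).append(callback)

    def post(self, name, sender=None, info=None):
        for callback in self._observers.get(name, []):
            callback(sender, info)

# An object store registers for change notifications, so that it can
# later decide whether to reload the modified objects or to ignore them:
center = NotificationCenter()
received = []
center.add_observer('ObjectsChangedInStore',
                    lambda sender, info: received.append(info))

# Another store commits and posts; the first store is notified:
center.post('ObjectsChangedInStore', sender='S2',
            info={'updated': ['o1']})
```

The part the framework still lacks, as said above, is the *handling*: the
notifications are posted, but observing stores do not yet apply them.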
What you describe here seems to be optimistic locking: when updated,
the corresponding database row is compared against the snapshot we got
at the time the object was fetched, and if they do not match, this is
an error (recoverable, in case you want to update anyway, take
whatever action depending on the differences, the object and the
object's type, etc.).

What I was talking about is something different. Suppose S1 and S2 are
two different object stores, respectively holding objects O1 and o1
that correspond to the very same row in the database (in other words,
they are clones, each living in its own world). Now suppose that o1 is
updated and saved back to the database: you may want to make sure that
O1 gets the changes as well (of course, you may also request that
changes that had already been made on O1 be re-applied after the
broadcast).

> In combination with this, other stores may still be notified of
> changes, and it is up to them (or their configuration) whether to
> reload the modified objects, or to ignore the broadcasts.

(ok, I should read more carefully what follows before answering ;)

> Otherwise, on each fetch the framework could check if any changes have
> taken place (may be expensive) for the concerned objects, and reload
> them automatically. This has the advantage of covering those cases
> when changes to data in the db are done by some way beyond the
> framework, and thus the framework can never be sure to know about it.

Yes, this would be expensive, and it is already available to a certain
extent: you can ``re-fault''/invalidate your objects so that they are
automatically re-fetched the next time they are accessed. I understand
this is not *exactly* the way you think of it, but that's kind of an
implementation and optimization detail --except if you expect objects
to be refreshed without overriding the changes that were already made;
this is not possible yet.
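The optimistic-locking scheme discussed above fits in a few lines. A
sketch, with invented names --the framework keeps its real snapshots in
class Database, and rows would of course come from the adaptor layer, not
from dicts:

```python
# Optimistic locking in miniature: keep a snapshot at fetch time,
# compare it with the current database row at update time, and raise
# a *recoverable* error when somebody else changed the row meanwhile.

class StaleObjectError(Exception):
    """The row changed since we fetched it; the caller may inspect the
    differences and decide to update anyway, merge, abort, etc."""

def commit_update(db_row, snapshot, changes):
    """Apply `changes` to `db_row` only if it still matches `snapshot`."""
    if db_row != snapshot:
        raise StaleObjectError('object has changed since it was fetched')
    db_row.update(changes)
    return db_row

row = {'id': 1, 'title': 'Dune'}   # what the database holds now
snapshot = dict(row)               # taken when the object was fetched
commit_update(row, snapshot, {'title': 'Dune (2nd ed.)'})  # succeeds
```

Whether a StaleObjectError aborts or is overridden would be the
application-specific (or per-store) policy discussed earlier in the thread.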
> > Last, and although this is not directly connected to that topic, I'd
> > like to make another point more precise: the framework does not yet
> > implement any locking, either optimistic or pessimistic, on DB
> > rows. This means that if other processes modify a row in the DB, an
> > ObjectStore will not notice the changes and will possibly override
> > them if the corresponding object has been modified in the meantime.
>
> Yes, but locking objects should be very short lived -- only the time
> to commit any changes, as mentioned above. This also makes it much
> less important, and only potentially a problem when two processes try
> to modify the same object at the same time.

PostgreSQL already locks rows during updates (I can't remember what
MySQL does), but anyway I do not plan to support this kind of
short-lived lock --in my view they should be managed by the database
itself.

What I meant was ``locking policy''
(http://minnow.cc.gatech.edu/squeak/2634). Pessimistic locking can
result in long-standing locks, from the moment an object is *about* to
be updated to the moment it is made persistent --it might be an
application requirement that an object cannot be edited by more than
one person at a time.

Well, I hope this overview of what the framework does /not/ support
yet does not completely hide what it already does!

Regards,

-- Sébastien.