From: Tripp L. <tl...@pe...> - 2002-03-02 20:28:44
On 1 Mar 2002, Jason Hildebrand wrote:

> Performance-wise, MiddleKit may be a bit slower (in terms of the number
> of queries) than doing your own SQL, but it gives you the data in
> ready-to-use objects, and saves a lot of development time.

My anecdotal experience is that MiddleKit is currently wicked slow, because it's largely unoptimized and makes little internal use of the database engine's available performance features. Please note that this is not a slam on MiddleKit's potential :) As Chuck has pointed out from time to time, it's still alpha code. We're using it in a production app, and I get cut on its rough edges every day or three, but it's still "good enough", and it's saving us enough roughing-in time that it's currently worth the pain.

I'm hesitant to make predictions about any contribution I'll make to improving the state of things, since I have a history of such grand designs being preempted by job changes and the like, but... I'm about to get hip-deep in a bunch of work on MK that should improve performance -and- stability under high concurrency. And I have a design on the drawing board for adding implicit version history to the store. -That- is going to rock hard.

Anyway, back to the original point... What Jason said about the nature of the clauses you would typically pass to the fetchObject methods is dead-on. They're generic, and you really shouldn't find yourself using sub-selects and the like too often, as long as you're querying based on the state of the objects you're fetching, not "related" objects. If you get into querying across relations, you're writing yourself into a store-specific situation anyway, because it's up to each store how it actually maps objects onto tables. Granted, all SQLObjectStore derivatives are going to share some basic structure, so you may get away safely. But your work is going to rely on intimate knowledge of how the particular store you're using maps inter-object relations.
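To make that distinction concrete, here's a minimal sketch. The `FooBar`/`Owner` classes and the `store.fetchObjectsOfClass` call are hypothetical stand-ins for a MiddleKit-style API; only the clause-building helper is plain runnable Python.

```python
def own_state_clause(column, value):
    """Build a where-clause over the fetched class's OWN columns.
    This style stays portable across SQLObjectStore derivatives.
    Single quotes are doubled as a crude SQL-literal escape."""
    return "where %s = '%s'" % (column, value.replace("'", "''"))

# Portable: filters on FooBar's own 'name' column.
clause = own_state_clause('name', 'tlilley')
# store.fetchObjectsOfClass(FooBar, clauses=clause)   # hypothetical call

# NOT portable: crossing a relation bakes in knowledge of how this
# particular store maps inter-object relations onto tables and keys.
unportable = ("where FooBar.ownerId = Owner.id "
              "and Owner.name = 'tlilley'")
```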
If you're terribly worried about portability, you could always collect your various clauses in a central dictionary, and then use them by name:

    all_clauses = {
        "nameExact": "where name = '%(soughtName)s'",
        "nameLike":  "where name like '%%%(soughtName)s%%'",
        ...
    }
    ...
    soughtName = 'tlilley'
    store.fetchObjectsOfClass(FooBar,
        clauses=all_clauses["nameExact"] % locals())

If you load such a dictionary from a disk-based config file, you'd have a central place to make all of your changes when you migrate from one DB to another. If you're -really- working with a shload of queries, you could probably even convert them programmatically, as long as you stuck to a well-defined set of basic query patterns.

Ultimately, I'd like to implement OQL or some other rich, object-oriented query language within MK. That way, the OQL parser could emit fast SQL queries, and you'd never have to worry about portability. Even better, when I write an MK store that backs onto an ODBMS, or a semnet like Framer-D <http://framerd.org/>, I won't have to change any of my queries, because they'll already be "objectified".

One day. Really. Soon. Well before the heat-death of the universe.
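The disk-based config-file idea above can be sketched with the standard library's configparser. The file name, section name, and clause names here are made up for illustration; interpolation is disabled so the `%(soughtName)s` placeholders pass through to Python's `%` formatting untouched.

```python
import configparser

# Stand-in for a hypothetical on-disk 'clauses.ini':
config_text = """\
[clauses]
nameExact = where name = '%(soughtName)s'
nameLike = where name like '%%%(soughtName)s%%'
"""

parser = configparser.ConfigParser(interpolation=None)
parser.optionxform = str          # preserve the mixed-case clause names
parser.read_string(config_text)
all_clauses = dict(parser.items('clauses'))

soughtName = 'tlilley'
clause = all_clauses['nameExact'] % {'soughtName': soughtName}
# store.fetchObjectsOfClass(FooBar, clauses=clause)   # hypothetical call
```

Migrating to another database then means editing one file, not hunting clauses through the codebase.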