Re: [Modeling-users] Modeling performance for large number of objects?
From: John L. <jl...@gm...> - 2004-12-20 13:44:10
On Mon, 20 Dec 2004 14:21:40 +0100, Wolfgang Keller <wol...@gm...> wrote:

> Hello,
>
> and thanks for your reply.
>
> > Usually, the penalty does not interfere with the job at hand.
>
> Oops, why?
>
> Obviously, when all objects get read into memory at startup of the
> application server, and written back only at night, then...

because this (reading in all the objects, modifying them all, and saving
them all) is not the usual use case. Usually you might *display* all the
objects (where rawRows comes in handy), and then the user selects one of
these objects to actually modify (so you fault the raw object into a real
one, work on it, and saveChanges). You still have a 20x penalty, but it's
much less than a second in this use case.

> > One reference point you might find useful is that when loading 3000
> > objects from a database, modifying them, and then saving the changes,
> > on a 700MHz P3 notebook, the loading took about 40 seconds, and the
> > saving, 200. That's 20 times what a direct SQL script would've taken.
>
> This gives me an idea, thanks. A multiplier of 20 is quite significant
> imho.
>
> > Of course, in both cases, writing the SQL script would've taken a
> > *lot* longer than the difference in run time, for me. However, it's
> > obvious that there are cases where the runtime difference overpowers
> > the developer time difference...
>
> I was wondering whether Modeling would be suitable as a persistence layer
> for a Python application that needs to process (create - transform - store)
> rather important amounts of data.
>
> The question is for me whether Modeling tries to and/or whether there would
> be some other way to cut the hourglass-display-time to the unavoidable
> minimum (dependent on the database) by some kind of smart caching of
> objects or by maintaining some kind of pool of pre-created object
> instances.
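[Editor's note: the display-then-fault workflow described above can be sketched in plain Python with sqlite3. This is an illustration of the pattern only, not Modeling's actual API; the names `fetch_raw_rows`, `Item`, and `save` are hypothetical stand-ins for Modeling's raw-row fetch, faulting, and saveChanges steps.]

```python
import sqlite3

# In-memory database standing in for the application's store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT, qty INTEGER)")
conn.executemany("INSERT INTO item VALUES (?, ?, ?)",
                 [(i, "item%d" % i, 0) for i in range(3000)])

def fetch_raw_rows(conn):
    """Cheap fetch for display: plain tuples, no object instantiation
    (the analogue of a rawRows fetch)."""
    return conn.execute("SELECT id, name FROM item ORDER BY id").fetchall()

class Item:
    """A 'real' object, built only for the row the user actually edits
    (the analogue of faulting a raw row into a full object)."""
    def __init__(self, conn, id):
        self.conn = conn
        self.id, self.name, self.qty = conn.execute(
            "SELECT id, name, qty FROM item WHERE id=?", (id,)).fetchone()

    def save(self):
        # The analogue of saveChanges(): one UPDATE for one object.
        self.conn.execute("UPDATE item SET name=?, qty=? WHERE id=?",
                          (self.name, self.qty, self.id))
        self.conn.commit()

rows = fetch_raw_rows(conn)        # display all 3000 rows cheaply
picked = Item(conn, rows[42][0])   # the user picks one -> fault it
picked.qty = 7                     # modify it...
picked.save()                      # ...and save just that one object
```

The 20x per-object penalty only ever applies to the single faulted object here, which is why it stays well under a second in interactive use.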
Modeling does cache the objects, and only saves those objects that have
effectively changed, so depending on your actual use cases you might be
surprised at how well it works. The loading, modifying and saving of all
the objects is pretty much the worst case; Modeling isn't meant (AFAICT)
for that kind of batch processing. It certainly is convenient, though :)
Of course, maybe Sébastien has a trick up his sleeve as to how one could
go about using Modeling for batch processing...

--
John Lenton (jl...@gm...) -- Random fortune:
bash: fortune: command not found
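[Editor's note: a minimal sketch of why "only saves those objects that have effectively changed" matters. This is not Modeling's implementation, just the snapshot-comparison idea an object store can use; `Record` and `TrackingStore` are hypothetical names.]

```python
class Record:
    """A plain in-memory object tracked by the store."""
    def __init__(self, pk, name):
        self.pk, self.name = pk, name

class TrackingStore:
    """Toy unit-of-work: snapshot each object's attributes at registration,
    and on save_changes() emit an UPDATE only for objects that differ."""
    def __init__(self):
        self._tracked = []  # (object, snapshot-of-attributes) pairs

    def register(self, obj):
        self._tracked.append((obj, dict(vars(obj))))

    def save_changes(self):
        updates = []
        for obj, snap in self._tracked:
            if vars(obj) != snap:
                updates.append("UPDATE record SET name=%r WHERE pk=%d"
                               % (obj.name, obj.pk))
                snap.update(vars(obj))  # refresh snapshot after "saving"
        return updates

store = TrackingStore()
recs = [Record(i, "r%d" % i) for i in range(1000)]
for r in recs:
    store.register(r)

recs[3].name = "changed"        # touch just one of the 1000 objects
updates = store.save_changes()  # -> only one UPDATE is generated
```

With this scheme, an interactive session that touches a handful of objects pays the write cost only for those; it is the modify-everything batch job that degenerates to the worst case.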