Re: [Modeling-users] Modeling performance for large number of objects?
From: John L. <jl...@gm...> - 2004-12-20 13:05:32
On Mon, 20 Dec 2004 11:43:07 +0100, Wolfgang Keller <wol...@gm...> wrote:
> Given the significant penalty for the creation of Python objects indicated
> by most benchmarks I have seen so far, I wonder how and how well Modeling
> deals with this issue...?

Usually, the penalty does not interfere with the job at hand. However, when you actually need to manipulate a large number of objects, the Modeling overhead can be quite noticeable. When said manipulation is only for querying, using rawRows avoids most of the creation overhead; but when you have to modify the objects in question, you might find yourself waiting quite a long time for saveChanges to complete.

One reference point you might find useful: when I loaded 3000 objects from a database, modified them, and then saved the changes, on a 700MHz P3 notebook, the loading took about 40 seconds and the saving about 200. That's 20 times what a direct SQL script would've taken. On the other hand, loading 100000 objects using rawRows takes about 20 seconds on this same machine, 4 times what the straight SQL would've taken.

Of course, in both cases, writing the SQL script would've taken a *lot* longer than the difference in run time, for me. However, it's obvious that there are cases where the runtime difference overpowers the developer time difference...

--
John Lenton (jl...@gm...) -- Random fortune:
bash: fortune: command not found
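[Editor's note: to make the object-creation overhead discussed above concrete, here is a small self-contained Python sketch. It does not use Modeling itself; the `Record` class and the row shape are illustrative assumptions. It compares materializing a full Python object per row (what a normal fetch does) against keeping the rows as plain tuples (what a rawRows-style fetch does), which is where most of the saved time comes from.]

```python
import time

class Record:
    """Stand-in for a fully materialized ORM object: one attribute per column."""
    def __init__(self, id, name, value):
        self.id = id
        self.name = name
        self.value = value

N = 100000
# Simulated database result set: each row is a plain tuple of column values.
rows = [(i, "row-%d" % i, i * 1.5) for i in range(N)]

t0 = time.time()
objects = [Record(*r) for r in rows]   # full-object path: one Python object per row
obj_secs = time.time() - t0

t0 = time.time()
raw = list(rows)                       # rawRows-style path: rows stay as tuples
raw_secs = time.time() - t0

print("full objects: %.3fs   raw rows: %.3fs" % (obj_secs, raw_secs))
```

[In Modeling itself, the raw path is selected through the fetch API rather than built by hand as above; the exact keyword spelling should be checked against the framework's documentation.]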