On 2/19/07, Allen Bierbaum <email@example.com> wrote:
> I thought about it and I don't know. Maybe you are doing something wrong.
That is always possible, although nothing in the script has really
changed since it used to run fast.
> The Python-Ogre project is not the smallest one, but it takes only 5 minutes
> (without cache and files) to generate code. During code generation they also
> parse the Ogre source tree to extract documentation. Can you show your script?
The generation script is here:
I have added some additional timing analysis, which shows that the vast
majority (>90%) of the time is being spent in two places:
1) line 587, the call to the module builder initialization where all the
code is parsed
2) line 1040, the call to build_code_creator where all the code creators
are built up and everything is set up to run the code generation.
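For reference, the timings were collected with simple wall-clock
instrumentation along these lines (a minimal sketch; the helper name
`timed` and the commented-out call sites are illustrative, not from the
actual script):

```python
import time

def timed(label, fn, *args, **kwargs):
    """Run fn with the given arguments, print the elapsed
    wall-clock time, and return fn's result unchanged."""
    start = time.time()
    result = fn(*args, **kwargs)
    print("%s took %.2f seconds" % (label, time.time() - start))
    return result

# Hypothetical usage around the two hot spots:
# mb = timed("module_builder init", module_builder.module_builder_t, files)
# timed("build_code_creator", mb.build_code_creator, module_name="example")
```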
Neither of these is a surprise, since this is where py++ does all of its work.
I am still collecting more information that should help track down why
the full_name and get_name methods are being called so many millions of
times (calling full_name 50 million times for a single run just seems
excessive). I am using the stack-monitoring technique I used when I
helped out with performance last time. I have added code to a local
version of py++ that not only counts the number of times full_name is
called, but also keeps a dictionary mapping every call path (stack) into
the code to the number of times full_name has been called along that
path. This should show us whether a few code paths end up making the
majority of the calls and could be optimized.
Collecting this information can take a long time though. I have already
had the instrumented code running for about 20 hours and I expect it
will run for another 24 hours before it is complete. Once the run is
done I will post the resulting information so everyone can take a look
and present any ideas they have for helping out performance.