Re: [Pyunit-interest] Timing
From: Fred L. Drake, Jr. <fd...@ac...> - 2001-04-06 13:14:54
Michal Wallace writes:
 > Have you actually tried this? The profiler is usually blazingly fast
 > and has almost no overhead whatsoever (there are hooks for it directly
 > in the VM). I run it as part of my CGI wrapper when I'm developing,
 > and it's quite comfortable. Even if you had a ton of test cases, I
 > would think it should take half a second or so at the worst.

Steve Purcell writes:
 > I based my comment on some experiences I had with coverage
 > measurement; there, the callback overhead is much higher because a
 > callback is made for each executed line, rather than for each
 > function, as is the case when profiling.
 >
 > So on reflection, "significantly slower" is probably significantly
 > overstated. :-)

I'd say that profiling is less of a performance hit than coverage
analysis, but it still sucks cycles like a mad dog. I say this based on
trying to improve performance in Grail, which requires a lot of work to
get a page displayed.

The profiler provides a number of classes that can be used to collect
information. There's even a HotProfile class which is touted as
relatively fast, but the information it collects is simpler (which is
why it is faster). Unfortunately, the data structure it creates is not
compatible with the analysis tools in the pstats module. ;(

It may be that some post-processing on the data could be used to convert
it to the format used by pstats. Another possibility is to write the
data collector in C, and then push the data into Python structures after
the measurement is complete.

Is anyone working on ideas like these? A *lot* of people can benefit
from the work, and anyone serious about performance would want to use
the profiler if it didn't hurt as badly as it does.

Also, is anyone working on coverage tools that can be used with PyUnit
and could be contributed to the project?

  -Fred

--
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations
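
A minimal sketch of the two interpreter hooks the overhead discussion
turns on, using only the standard sys module (the hook bodies here are
placeholders, not a real profiler or coverage tool): sys.setprofile gets
one callback per function call/return, while sys.settrace can request a
callback for every executed line.

    import sys

    def profile_hook(frame, event, arg):
        # Invoked for 'call', 'return' and C-function events only --
        # one callback per function, which is why profiling is cheaper.
        pass

    def trace_hook(frame, event, arg):
        # Invoked on 'call'; returning a local trace function requests a
        # 'line' event for every executed line -- the per-line callbacks
        # that make coverage measurement so much slower.
        return trace_hook

    sys.setprofile(profile_hook)   # per-function hook (profiling)
    sys.setprofile(None)
    sys.settrace(trace_hook)       # per-line hook (tracing/coverage)
    sys.settrace(None)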
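
For the profiling side, a rough sketch of timing a PyUnit run with the
stock profile and pstats modules; 'mymodule' and run_tests() are
hypothetical stand-ins for however you actually assemble your suite.

    import profile
    import pstats
    import unittest

    def run_tests():
        # Hypothetical suite construction -- substitute your own tests.
        suite = unittest.defaultTestLoader.loadTestsFromName('mymodule')
        unittest.TextTestRunner().run(suite)

    # profile.run() execs the statement in __main__, so run_tests must be
    # defined (or imported) there; raw timing data goes to 'tests.prof'.
    profile.run('run_tests()', 'tests.prof')

    stats = pstats.Stats('tests.prof')
    stats.sort_stats('cumulative').print_stats(20)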
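
And on the coverage question, a hedged sketch using the trace module
(originally a Tools/ script, later part of the standard library) wrapped
around the same hypothetical run_tests():

    import trace
    import unittest

    def run_tests():
        # Hypothetical -- build whatever suite you want covered.
        suite = unittest.defaultTestLoader.loadTestsFromName('mymodule')
        unittest.TextTestRunner().run(suite)

    # count=1 records how often each line executes; trace=0 suppresses
    # printing each line as it runs.  The expensive per-line callback is
    # still there, which is exactly the overhead discussed above.
    tracer = trace.Trace(count=1, trace=0)
    tracer.runfunc(run_tests)

    # Write annotated .cover files (missing lines marked) into ./coverage.
    tracer.results().write_results(show_missing=True, coverdir='coverage')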