> The default event works out to roughly 2,000 samples/second (this has
> changed with oprofile 0.7 though). This assumes a fully loaded system
After sleeping on the problem, I decided that I probably needed to take the 400
MHz reported and divide it by the count reported to come up with the sample
interval. And this does indeed come out to approximately 2,000 samples per second.
> I'm not sure that OProfile is really a suitable tool for finding out
> this information though; it's statistical only, and there's no strong
> guarantee that the number of samples your application receives directly
> measures how much time was spent executing your application startup.
The test I am performing brings up the application and then immediately exits,
so the vast majority of the time spent profiling is in the startup.
> Surely using gettimeofday() at particular points in the source would be
> a more accurate and useful approach ?
This is a very big, complex application, and I was initially trying to see if I
could get some clues about where to start focusing with such an approach. I
also wanted to explore what profiling options are available under Linux for this
and future situations.
As it turns out, I found a problem in the code where a countdown counter was
improperly initialized to an extremely large value, causing a loop to take much
longer than needed. OProfile did provide me with some good clues about where to
start looking. Also, once I understood the problem, the profile data
corroborated the issue very nicely.
Thanks for a great tool; it is going to be fun tracking down other problems with
it.

Icicle Software, Inc.