On Thu, Dec 30, 2010 at 10:03:44AM -0600, Maynard Johnson wrote:
> Simon Ruggier wrote:
> > Sorry, I should have proofread the entire thing before sending it.
> > On Wed, Dec 29, 2010 at 11:16 PM, Simon Ruggier <simon80@...> wrote:
> >> Earlier this year, I was trying to debug a performance problem in a
> >> video playback application using oprofile. The software was capable of
> >> playing at acceptable framerates most of the time, but there were
> >> periods where it would freeze for a few seconds at a time. I tried to
> >> diagnose this using callgraph support, etc., but it was hard to
> >> identify the problem. I eventually figured it out by collecting
> >> samples during the problem time period, and then using opreport's
> >> differential output to compare with samples collected during normal
> >> playback. This allowed me to find that the culprit was a somewhat
> >> unrelated application that was occasionally starving the video
> >> playback out of CPU time. This was not obvious without focusing on
> >> the specific time period in which the problem occurred and using
> >> the differential output, because the culprit's samples were blended
> >> in with everything else.
> >> It would be nice if it were possible to solve this use case using
> >> just one session of sample collection (surely I'm not the only
> >> person with this use case). This
> >> leads me to think that it would be useful to store timing information
> >> in the samples. Storing them on a per-sample basis would obviously be
> >> overkill, but it seems to me like storing them with each context
> >> switch would provide very fine granularity without excessively
> >> bloating the size of sample data. From reading the internals document,
> >> it seems like it wouldn't be necessary to modify any
> >> architecture-specific code to make this work, but the sample storage
> >> format in the event buffer and maybe the cpu buffers would need an
> >> extra field to be added to the events to store the time.
> >> Obviously this information wouldn't be helpful without a good
> >> reporting format, but I think it could be very helpful if combined
> >> with a GUI that performs either histogramming or windowing with
> >> controllable granularity, and displays symbols proportionately in a
> >> format reminiscent of a line chart, but with no intersecting
> >> lines, of course. The important purpose of the visual display would be
> >> to highlight variations over time so that the user can focus on
> >> specific time intervals. The actual symbol names could be shown in a
> >> tooltip, and maybe the user could generate textual reports as with
> >> opreport, but for a graphically selected interval or intervals of
> >> interest. I think a tool like this would be useful for the use case I
> >> describe above, but also useful for distinguishing between separate
> >> steps in algorithms that perform multiple CPU intensive tasks in
> >> order.
> >> I don't expect anyone to step up and implement all of this, and I
> >> may not have time to implement it myself in the near future because
> >> of conflicting obligations from school. Please let me know if
> >> there's some existing way to deal with these use cases that I've
> >> missed, or if there is disagreement about changing the way samples are
> >> collected in the kernel. Any other discussion of this idea is also
> >> welcome.
> Simon, you are probably aware of the new profiling tool called 'perf'
> that's been available with kernel source since around version 2.6.32.
> The kernel interface for that tool (called "Performance Events
> Subsystem for Linux", aka "perf_events") provides a pretty wide range
> of data formats. Checking the source code this morning, I do see a
> timestamp value being part of some of those formats. The 'perf' tool
> is actually a collection of tools, where 'perf' is the frontend. It
> appears that some of the tools make use of the timestamp. I tried a
> few usage experiments (with 'trace' and 'timechart') but couldn't
> really get them to work; it's probably user error. But without decent
> user documentation (the man pages were no help), I couldn't get any
> further. It would take some deep code dives, which I don't have time
> for.
> But my gut tells me that it would be more productive and efficient to
> leverage perf_events for the kind of functionality you're looking for
> rather than trying to enhance oprofile. For one thing, adding new
> functionality to the oprofile kernel driver is a hard sell these days
> because of the desire on the part of the kernel community to deprecate
> oprofile (at least the kernel part) in favor of perf_events.
Not all events had timestamps; that was an oversight when the perf
events were first designed. Now one can ask for timestamps for all
events by setting the perf_event_attr.sample_id_all bit in >= 2.6.38
kernels; please take a look at the code in the tip/perf/core branch of
The tools now use it when available to sort events when post-processing.
Documentation is indeed lacking; I can point you to some files and URLs:
In the kernel sources:
Re-reading Simon's problem statement, it seems close to what David is
trying to achieve, i.e. being able to correlate app-specific logs with
performance events, using the same kind of clock that the app is using.
Is where I'm working on this; don't hold your breath, but I guess it
will get close to what you want. Please see the linux-perf-users mailing
list archives for the discussion about David's "timehist" utility that
sparked this discussion:
You can start from here: