I’ve used callgraph profiling with a depth of 1 to great success.

I’m now trying callgraph profiling at greater depth on a long-running program. The program runs for hours, but I limited profiling to 2.5 hours (oprofile --start; sleep 2.5h; oprofile --shutdown), and I told oprofile to record to a depth of 5.

The Linux machine crashed.


I haven’t gotten the machine back up to look at yet, but is it possible the sampling

generated far too much data?  I used the standard sampling frequency and event,

and this is on a machine with 24 logical processors (12 cores, hyperthreaded).
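To put “too much data” in perspective, here is a rough back-of-envelope sketch. Every number in it is my assumption, not a measurement: a ~3 GHz clock, a cycle-counting event with a reset count of 100,000, and 8 bytes stored per stack frame.

```python
# Rough, assumed numbers -- not measured from the actual setup.
cpu_hz = 3e9             # assumed clock rate
event_count = 100_000    # assumed sampling event reset count
logical_cpus = 24        # 12 cores, hyperthreaded (from the post)
depth = 5                # callgraph depth used
bytes_per_frame = 8      # assumed raw storage per stack entry
seconds = 2.5 * 3600     # the 2.5-hour run

samples_per_sec = cpu_hz / event_count * logical_cpus   # ~720k samples/s
bytes_total = samples_per_sec * depth * bytes_per_frame * seconds
print(f"{bytes_total / 1e9:.0f} GB")   # prints: 259 GB
```

Under those assumptions the raw callgraph data would be on the order of hundreds of gigabytes for the run, so filling a disk (or overwhelming the daemon) doesn’t seem far-fetched.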

Also, I believe I had quite a bit of disk space available, so I’m not sure it ran out of disk;

but if it’s saving raw stacks or sequences of PC positions, I can imagine that could get large.


Might there be a way to make this more space-efficient?

If one recorded not at the lowest level, but only the symbols/routines active on the stack (rather than the exact PC location within each routine),

the result might still be useful yet much smaller.

Also, the call paths should naturally form a tree of some kind, so if the counts were saved in a tree structure,

a single count could be stored at each node or leaf instead of one record per sample.
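Something like that tree could be sketched as follows. This is a toy illustration only, with a made-up symbol table; it says nothing about oprofile’s actual on-disk format. Each sampled stack is first folded from raw PC values down to routine names, then inserted into a trie keyed by symbol, so storage grows with the number of distinct call paths rather than the number of samples.

```python
import bisect
from collections import defaultdict

# Hypothetical symbol table: (start address, name), sorted by address.
SYMTAB = [(0x1000, "main"), (0x2000, "parse"), (0x3000, "hash_insert")]
STARTS = [addr for addr, _ in SYMTAB]

def pc_to_symbol(pc):
    # Fold a raw PC down to the name of its containing routine.
    return SYMTAB[bisect.bisect_right(STARTS, pc) - 1][1]

def make_node():
    # A trie node: children keyed by callee symbol, plus a sample count.
    return {"children": defaultdict(make_node), "count": 0}

ROOT = make_node()

def record(raw_stack):
    # raw_stack: PCs from outermost caller down to the sampled leaf.
    node = ROOT
    for pc in raw_stack:
        node = node["children"][pc_to_symbol(pc)]
    node["count"] += 1   # one counter per distinct path, not per sample

# Two samples down main->parse, one down main->hash_insert:
record([0x1010, 0x2040])
record([0x1010, 0x2080])   # same path once PCs are folded to symbols
record([0x1010, 0x3004])
```

Here three samples collapse into two stored paths: repeated hits along a hot path only bump a counter instead of appending another raw stack.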


Sorry if this is too much of a newbie question. I’m impressed with the tool and want to help out,

but I don’t have the spare time to contribute more directly.