Hi,

I am using the oprofile-0.9.6 tool.

> See my posting to the oprofile-list from June 24, subject "Re: Error happens when using command "opreport -lc"", for a little help on how to handle overflows (lost samples).

I have gone through the following link, as per your comment.

http://marc.info/?l=oprofile-list&m=127737576520523&w=2

But I did not get any info from it.

Is the link I am referring to the correct one?

Regards
koteswararao.

On Mon, Jul 12, 2010 at 8:49 PM, nelakurthi koteswararao <koteswararao18@gmail.com> wrote:
Hi,

Thanks for your reply.


> 1. Move the point at which stats are copied so users don't need to do '--shutdown' (probably move to do_dump in the opcontrol script).


So, do I need to copy the following line into the do_dump_data() function in the utils/opcontrol script?

    cp -r /dev/oprofile/stats "$SAMPLES_DIR/current"

> 2. Fix a bug in do_dump to allow non-root users to do '--dump' option.
How would I fix the bug in do_dump to allow non-root users to use the --dump option?

Any hint regarding this?
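Would it be something along these lines? (This is purely a guess on my part, not the actual fix; I am assuming the problem is that the stats snapshot under "$SAMPLES_DIR/current" ends up owned by root, so a later non-root --dump cannot use it.)

{{{
# Guess only: after the stats have been copied once as root, relax the
# permissions so non-root users can at least read the snapshot.
if test -d "$SAMPLES_DIR/current/stats"; then
    chmod -R a+rX "$SAMPLES_DIR/current/stats"
fi
}}}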

Regards
koteswararao.


On Sat, Jul 10, 2010 at 5:21 AM, Maynard Johnson <maynardj@us.ibm.com> wrote:
nelakurthi koteswararao wrote:
> Hi Maynardj,
>
> I found the event_lost_overflow and sample_lost_overflow
> files on the target under the following paths:
>
> /dev/oprofile/stats/event_lost_overflow
> 0
> and
> cat /dev/oprofile/stats/cpu{0,1,2,3}/sample_lost_overflow
> 4301
>
> But I found the count in sample_lost_overflow to be 4301.
>
> So, if sample_lost_overflow contains samples, why did I get the
> "Overflow stats not available" message during the profiling
> that I described in the previous mail?

In future, please post your questions to the oprofile-list.

You're not the first to be confused about the "overflow stats not available" message.  The message simply means that the stats files haven't been copied to a place where opreport can access them (by default, opreport expects the stats to be available in /var/lib/oprofile/samples/current).  The stats are copied there when you do a --shutdown.  There are two things on my to-do list regarding this:

1. Move the point at which stats are copied so users don't need to do '--shutdown' (probably move to do_dump in the opcontrol script).
2. Fix a bug in do_dump to allow non-root users to do '--dump' option.
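
In the meantime, a manual workaround (run as root, and assuming the default session directory) is to snapshot the stats yourself after a dump, so opreport can see them without a --shutdown:

{{{
# Manual workaround: copy the driver's stats into the session directory
# so opreport can report overflow counts without requiring --shutdown.
opcontrol --dump
cp -r /dev/oprofile/stats /var/lib/oprofile/samples/current
opreport
}}}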

Now, to your situation . . . So you are seeing samples lost in the per-cpu buffers.  See my posting to the oprofile-list from June 24, subject "Re: Error happens when using command "opreport -lc"", for a little help on how to handle overflows (lost samples).
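
As a general note, one common way to reduce lost samples is to enlarge the kernel buffers before starting the profiler (the sizes below are only illustrative; see the opcontrol man page and tune them for your workload):

{{{
# Illustrative only: larger buffers reduce the chance of lost samples.
opcontrol --shutdown
opcontrol --buffer-size=1048576 --cpu-buffer-size=65536
opcontrol --start --vmlinux=/boot/vmlinux --image=/tmp/array
}}}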

-Maynard

>
> Any hint ?
>
> Regards
> koteswararao.
>
>
> On Fri, Jul 9, 2010 at 8:17 PM, nelakurthi koteswararao <
> koteswararao18@gmail.com> wrote:
>
>> > Dear maynardj,
>> >
>> >
>> >    1. The change was made to the oprofile tool in the following commit
>> >    at Fri Jul 10 20:03:39 2009, with comment 'Handle bad samples from kernel
>> >    due to overflows':
>> >
>> >    http://oprofile.cvs.sourceforge.net/viewvc/oprofile/oprofile/libpp/profile_spec.cpp?r1=1.37&r2=1.38
>> >
>> >    2. This change was made to 'Print a warning message if we detect any
>> >    sample buffer overflows occurred in the kernel driver.'
>> >
>> >    3. In
>> >    oprofile-0.9.6/libpp/profile_spec.cpp:warn_if_kern_buffs_overflow(),
>> >    it checks for:
>> >       - <oprofile_sample_dir> + "stats/event_lost_overflow"
>> >       - <oprofile_sample_dir> + "stats/…/sample_lost_overflow"
>> >    But even if the open() fails for some reason, the current oprofile
>> >    code handles it with just a warning and prints the above message.
>> >
>> > I am getting this error while testing the 2.6.29 kernel with
>> > oprofile-0.9.6 on an ARM target (naviengine + SMP).
>> >
>> > I ran the following oprofile commands:
>> >
>> > {{{
>> >
>> > /tmp # opcontrol --start --vmlinux=/boot/vmlinux --image=/tmp/array
>> > Using default event: CPU_CYCLES:100000:0:1:1
>> > Using 2.6+ OProfile kernel interface.
>> > Reading module info.
>> > Using log file /var/lib/oprofile/samples/oprofiled.log
>> > Daemon started.
>> > Profiler running.
>> > /tmp # gcc -g array
>> > array    array.c
>> > /tmp # gcc -g array
>> > array    array.c
>> > /tmp # gcc -g array.c -o array
>> > /tmp # ./array
>> > /tmp # opcontrol --dump
>> > /tmp # opreport
>> > Overflow stats not available
>> > CPU: ARM MPCore, speed 0 MHz (estimated)
>> > Counted CPU_CYCLES events (clock cycles counter) with a unit mask of 0x00
>> > (No unit mask) count 100000
>> > CPU_CYCLES:100000|
>> >   samples|      %|
>> > ------------------
>> >     10270 100.000 array
>> > }}}
>> >
>> > I observed the "overflow stats not available" message during
>> > profiling of the array application.
>> > I understand it as a debug message indicating that no overflow samples were
>> > observed.
>> >
>> > {{{
>> > /tmp # oparchive -o /var/lib/oprofile/oprofiled_log/ ./array
>> > Overflow stats not available
>> > Session overflow stats not available
>> > }}}
>> >
>> > The above command collected the samples into the /var/lib/oprofile/oprofiled_log/
>> > directory.
>> > But "Session overflow stats not available" is generated as a warning
>> > message.
>> > Even if the open() fails for some reason, the current oprofile
>> > code handles it with just a warning and prints the above message, by executing
>> > the code in the
>> > oprofile-0.9.6/libpp/profile_spec.cpp:warn_if_kern_buffs_overflow()
>> > function.
>> >
>> >
>> > Where do I need to search to find the following files?
>> >
>> >    - <oprofile_sample_dir> + "stats/event_lost_overflow"
>> >    - <oprofile_sample_dir> + "stats/…/sample_lost_overflow"
>> >
>> > I did not find the event_lost_overflow and sample_lost_overflow files under
>> > the /var/lib/oprofile/oprofiled_log/
>> > directory, which is my current session from these example commands.
>> >
>> > But I am able to generate the samples output as follows.
>> >
>> > {{{
>> > /tmp # opreport archive:/var/lib/oprofile/oprofiled_log/
>> > Overflow stats not available
>> > CPU: ARM MPCore, speed 0 MHz (estimated)
>> > Counted CPU_CYCLES events (clock cycles counter) with a unit mask of 0x00
>> > (No unit mask) count 100000
>> > CPU_CYCLES:100000|
>> >   samples|      %|
>> > ------------------
>> >     10270 100.000 array
>> > /tmp #
>> >
>> > }}}
>> >
>> >
>> > Is there any sanity check in 2.6.29 or prior kernel versions to avoid
>> > this debug message, or to justify these messages?
>> >
>> > Any hint will greatly help me.
>> >
>> > Thanks for allowing me to send this mail.
>> >
>> > Regards
>> > koteswararao.
>> >
>> >
>> >
>> >