From: John L. <le...@mo...> - 2003-10-14 23:55:58
|
On Tue, Oct 14, 2003 at 04:28:55PM -0700, John Levon wrote:
> Perfmon support.

OK, this is the first round of perfmon support for ia64/2.6. It's still a little flaky (signal races) but basically works OK.

Currently we work against pfmlib v3 (not yet released), but I think we have to drop that dependency - instead of separating out unit mask vs. event, pfmlib has a big list of all possible combinations. It seems broken for the bitmask-style events anyway. Also, pfmlib only seems to support user vs. kernel counting across all the counters; I forget whether ia64 has per-counter support for that, but I think it does.

It shouldn't be too much work to drop the dependency - basically we have to encode the value ourselves like the other arches, but in userspace. We already have the counter allocation stuff anyway. We should probably make the 2.4 daemon take the --events parameter too and share the code.

I managed to hang the test box once on an opcontrol --shutdown (dead to the world); not sure what that was yet.

regards
john
|
From: Stephane E. <er...@hp...> - 2003-10-15 00:42:51
|
John,

On Wed, Oct 15, 2003 at 12:55:51AM +0100, John Levon wrote:
> Currently we work against pfmlib v3 (not yet released) but I think we
> have to drop that dependency - instead of separating out unit mask vs.
> event, pfmlib has a big list of all possible combinations. It seems
> broken for the bitmask-style events anyway.

Using a big list is more convenient, and the user can quickly identify an event just by looking at the name. But again, you are free to use whatever support library you want.

> Also pfmlib only seems to support user vs. kernel counting across all
> the counters; I forget if ia64 has per-counter support for that, but I
> think that it does.

It is possible to have per-event priv levels with libpfm:

	pfmlib_input_param_t.pfp_events[i].plm

There is a global setting used when nothing was specified for a specific event.

> It shouldn't be too much work to drop the dependency - basically we have
> to encode the value ourselves like the other arches, but in userspace.
> We already have the counter allocation stuff anyway.

Keep in mind that Itanium 2 processors have about 400 events. Of course, not all of them may matter for the kind of measurements you are doing with your user-level tools. In addition, there are some very complex sets of constraints between events that you would have to take care of.

--
-Stephane
|
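[Editorial note: the per-event vs. global priv-level fallback Stephane describes can be sketched as below. The struct is a simplified stand-in for libpfm v3's pfmlib_input_param_t, keeping only a per-event plm (the field named in the mail) plus a global default; the exact field names beyond pfp_events[i].plm, the array size, and the PFM_PLM* values are assumptions for illustration, though the values match the plm=0x9 (kernel+user) bits seen in pmc[4] later in the thread.]

```c
/* Simplified stand-in for libpfm v3's pfmlib_input_param_t: only the
 * per-event plm and a global default are modelled. Field names other
 * than pfp_events[i].plm, and the PFM_PLM* values, are assumptions. */
#define PFM_PLM0 0x1   /* kernel level */
#define PFM_PLM3 0x8   /* user level */

typedef struct { unsigned int event; unsigned int plm; } pfm_ev_t;
typedef struct {
    unsigned int pfp_dfl_plm;     /* global default priv level mask */
    unsigned int pfp_event_count;
    pfm_ev_t pfp_events[8];       /* per-event plm; 0 = unspecified */
} pfm_input_t;

/* The rule described in the mail: a per-event plm wins; when none was
 * specified for that event, the global setting applies. */
static unsigned int effective_plm(const pfm_input_t *inp, unsigned int i)
{
    return inp->pfp_events[i].plm ? inp->pfp_events[i].plm
                                  : inp->pfp_dfl_plm;
}
```

This would let a daemon count one event in user mode only while the others fall back to a kernel+user default.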
From: John L. <le...@mo...> - 2003-10-15 01:03:39
|
On Tue, Oct 14, 2003 at 05:38:16PM -0700, Stephane Eranian wrote:
> Using a big list is more convenient, and the user can quickly identify
> an event just by looking at the name. But again, you are free to use
> whatever support library you want.

Well, currently with pfmlib it's impossible to specify the bitmask events properly (such as ALAT_CAPACITY_MISS), though it's been a while since I browsed the PDFs.

> It is possible to have per-event priv levels with libpfm. There is a global
> setting used when nothing was specified for a specific event:

Ah, cheers.

> Keep in mind that Itanium 2 processors have about 400 events. Of course,
> not all of them may matter for the kind of measurements you are doing
> with your user level tools. In addition, there are some very complex sets
> of constraints between events that you would have to take care of.

Yes, there are a lot of constraints we don't handle yet. But we really have no choice but to drop pfmlib - we simply cannot duplicate all this info twice (once for 2.4 and once for 2.6) whilst breaking the bitmask unit masks.

Stephane, I'm having some strange problems with perfmon - in particular, sometimes one of the CPUs does not seem to get initialised properly: no overflow interrupts occur. The children are affined to each CPU correctly, and the other CPU is fine. These children are both set up identically, with CPU_CYCLES and IA64_INST_RETIRED on counters 4 and 5 respectively. Ideas?

regards
john
--
Khendon's Law: If the same point is made twice by the same person, the thread is over.
|
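[Editorial note: the "encode the value ourselves ... in userspace" step John proposes can be sketched as a pure function. This is an illustrative sketch, not OProfile's actual code: the bit positions are inferred from values appearing later in the thread (pmc[4]=0x801269 programming CPU_CYCLES, event code 0x12, suggests a privilege mask in bits 0-3, an overflow-interrupt bit at 5, a privileged-monitor bit at 6, and the event select in bits 8-15), and the unit mask is assumed to occupy bits 16-19; all names are mine.]

```c
#include <stdint.h>

/* Sketch of encoding an Itanium 2 PMC value in userspace. Field layout
 * is inferred, not authoritative: plm in bits 0-3, oi at bit 5, pm at
 * bit 6, event select in bits 8-15, unit mask assumed in bits 16-19.
 * (The 0x800000 seen in the logged pmc[4] is the PMU's default bit 23,
 * preserved by the kernel, not encoded here.) */
static uint64_t encode_pmc(unsigned event, unsigned umask,
                           unsigned plm, int oi, int pm)
{
    return (uint64_t)(plm & 0xf)            /* privilege level mask  */
         | ((uint64_t)(oi ? 1 : 0) << 5)    /* overflow interrupt    */
         | ((uint64_t)(pm ? 1 : 0) << 6)    /* privileged monitor    */
         | ((uint64_t)(event & 0xff) << 8)  /* event select          */
         | ((uint64_t)(umask & 0xf) << 16); /* unit mask (a bitmask) */
}
```

With a real bitmask field, combinations compose naturally: PIN1|PIN3 is just umask 0x2|0x8 = 0xa, exactly the case a one-name-per-subevent event list cannot express.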
From: Stephane E. <er...@hp...> - 2003-10-15 02:08:25
|
John,

On Wed, Oct 15, 2003 at 02:03:34AM +0100, John Levon wrote:
> Well, currently with pfmlib, it's impossible to specify the bitmask
> events properly (such as ALAT_CAPACITY_MISS), though it's been a while
> since I browsed the PDFs.

I don't know what you mean by "properly". The library gives access to all the events and their "subevents", i.e. unit masks. Take the example of ALAT_CAPACITY_MISS, which is the main event. It has 3 possible unit masks: INT, FP, ALL. Therefore the library provides 3 "events":

	ALAT_CAPACITY_MISS_ALL
	ALAT_CAPACITY_MISS_FP
	ALAT_CAPACITY_MISS_INT

The unit mask is automatically encoded, and it's transparent to the user, who does not need to know that there are events with subevents.

> Yes, there are a lot of constraints we don't handle yet. But we really
> have no choice in dropping pfmlib - we simply cannot duplicate all this
> info twice (once for 2.4 and once for 2.6), whilst breaking the bit mask
> unit masks.

Well, given you already have some support for 2.4, it is probably best to keep it for 2.6. You just need to keep upgrading each time there is a new PMU core.
> Stephane, I'm having some strange problems with perfmon - in particular,
> sometimes one of the CPUs does not get initialised properly it seems:
> no overflow interrupts occur. The children are affined to each CPU
> correctly, and the other CPU is fine.

The typical sequence in perfmon-2 to set up system-wide SMP monitoring is to prepare all the events, the PMC/PMD values and the context structure. Then you fork off one task per CPU and pin it on that CPU. Then you create the context, program the PMCs and PMDs, start, measure for a while, stop, and then read. If using a separate process (via fork() as opposed to pthread_create()), then I would typically do:

	cpumask = 1UL << my_cpu;
	sched_setaffinity(getpid(), sizeof(cpumask), cpumask);
	perfmonctl(0, PFM_CREATE_CONTEXT, ctx, 1);
	fd = ctx->ctx_fd;
	perfmonctl(fd, PFM_WRITE_PMCS, pc, pc_count);
	perfmonctl(fd, PFM_WRITE_PMDS, pd, pd_count);
	perfmonctl(fd, PFM_START, NULL, 0);

I think that is pretty much what you have.

You can toggle perfmon-2 debugging by using:

	to turn on:  # echo 1 >/proc/sys/kernel/perfmon/debug
	to turn off: # echo 0 >/proc/sys/kernel/perfmon/debug

To debug overflow handlers, in addition do:

	# echo 1 >/proc/sys/kernel/perfmon/debug_ovfl

What is the error message returned by the perfmonctl() call?

--
-Stephane
|
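[Editorial note: Stephane's per-CPU sequence can be fleshed out into a compilable sketch. The perfmon pieces below are stubs so the control flow can be read and compiled on any machine; on a real ia64/2.6 perfmon-2 kernel they come from the perfmonctl() system call, and the struct layouts here are simplified guesses (only ctx_fd, named in the mail, is meaningful). The PFM_CONTEXT_LOAD step is included because it appears in the kernel log later in the thread.]

```c
#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>

/* Stubs standing in for the ia64 perfmon-2 interface; layouts and the
 * perfmonctl() behaviour are simplified placeholders, not the real ABI. */
typedef struct { int ctx_fd; } pfarg_context_t;
typedef struct { unsigned int reg_num; unsigned long reg_value; } pfarg_reg_t;
enum { PFM_CREATE_CONTEXT, PFM_WRITE_PMCS, PFM_WRITE_PMDS,
       PFM_CONTEXT_LOAD, PFM_START };
static int perfmonctl(int fd, int cmd, void *arg, int count)
{
    (void)fd; (void)cmd; (void)arg; (void)count;
    return 0;                       /* stub: pretend success */
}

/* One forked child per CPU: pin first, then create, program, load and
 * start the system-wide context, mirroring the sequence in the mail. */
static int setup_cpu(int my_cpu, pfarg_reg_t *pc, int pc_count,
                     pfarg_reg_t *pd, int pd_count)
{
    cpu_set_t mask;
    pfarg_context_t ctx = { 0 };
    int fd;

    CPU_ZERO(&mask);
    CPU_SET(my_cpu, &mask);         /* i.e. cpumask = 1UL << my_cpu */
    if (sched_setaffinity(getpid(), sizeof(mask), &mask) < 0)
        return -1;

    if (perfmonctl(0, PFM_CREATE_CONTEXT, &ctx, 1) < 0)
        return -1;
    fd = ctx.ctx_fd;

    if (perfmonctl(fd, PFM_WRITE_PMCS, pc, pc_count) < 0 ||
        perfmonctl(fd, PFM_WRITE_PMDS, pd, pd_count) < 0 ||
        perfmonctl(fd, PFM_CONTEXT_LOAD, NULL, 1) < 0 ||
        perfmonctl(fd, PFM_START, NULL, 0) < 0)
        return -1;
    return fd;
}
```

The key design point is ordering: the affinity call must come before PFM_CREATE_CONTEXT, so the system-wide context is created, loaded and started on the CPU it is meant to monitor.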
From: John L. <le...@mo...> - 2003-10-15 02:33:44
|
On Tue, Oct 14, 2003 at 07:05:29PM -0700, Stephane Eranian wrote:
> I don't know what you mean by "properly". The library gives access to all
> the events and their "subevents", i.e. unit masks. Take the example of
> ALAT_CAPACITY_MISS, which is the main event. It has 3 possible unit masks:
> INT, FP, ALL. Therefore the library provides 3 "events":
> 	ALAT_CAPACITY_MISS_ALL
> 	ALAT_CAPACITY_MISS_FP
> 	ALAT_CAPACITY_MISS_INT

For example, look at the EXTERN_DP_PINS_0_TO_3 event. libpfm has:

	{ "EXTERN_DP_PINS_0_TO_3_PIN0", {0x1009e}, 0xf0, 1, {0xf00000},
	  "DP Pins 0-3 Asserted -- include pin0 assertion"},
	#define PME_ITA2_EXTERN_DP_PINS_0_TO_3_PIN1 220
	{ "EXTERN_DP_PINS_0_TO_3_PIN1", {0x2009e}, 0xf0, 1, {0xf00000},
	  "DP Pins 0-3 Asserted -- include pin1 assertion"},
	#define PME_ITA2_EXTERN_DP_PINS_0_TO_3_PIN2 221
	{ "EXTERN_DP_PINS_0_TO_3_PIN2", {0x4009e}, 0xf0, 1, {0xf00000},
	  "DP Pins 0-3 Asserted -- include pin2 assertion"},
	#define PME_ITA2_EXTERN_DP_PINS_0_TO_3_PIN3 222
	{ "EXTERN_DP_PINS_0_TO_3_PIN3", {0x8009e}, 0xf0, 1, {0xf00000},
	  "DP Pins 0-3 Asserted -- include pin3 assertion"},

i.e. one event for each bit that is on. But the unit mask is a *bitmask*, that is, any combination of the PIN0-PIN3 bits can be set. The OProfile equivalent has:

	event:0x9e counters:0,1,2,3 um:extern_dp_pins_0_to_3 minimum:5000
	name:EXTERN_DP_PINS_0_TO_3 : DP pins 0-3 asserted

then:

	# CPU_IA64_2 Table 11-63
	name:extern_dp_pins_0_to_3 type:bitmask default:0xf
		0x1 PIN0
		0x2 PIN1
		0x4 PIN2
		0x8 PIN3
		0xf ALL

So the user can specify a mask of PIN1+PIN3. This doesn't seem possible with libpfm?

> I think that is pretty much what you have.

Yes, it is. The below is essentially the result of what you described. Note this version does the setup without pfmlib, but the same happens in either case. I've verified the children are correctly affined to different CPUs.

> What is the error message returned by the perfmonctl() call?
There is no error message; it all apparently succeeds, but no interrupts are generated. I append the log from starting up oprofile below, from one of the times this failure mode happened. I can't include the overflow log, since one of the CPUs is working, and it would kill the machine :) In this case it's CPU1 that has trouble. I've also given you /proc/perfmon (note this includes stats from previous oprofile runs).

regards
john

Oct 14 21:00:47 easement kernel: sys_perfmonctl.4728: CPU1 [14494] cmd=pfm_context_create idx=8 valid=1 narg=0x1 argsz=128 count=1 Oct 14 21:00:47 easement kernel: pfm_ctx_getsize.2539: CPU1 [14494] arg_size=0 Oct 14 21:00:47 easement kernel: sys_perfmonctl.4782: CPU1 [14494] [14494] restart_args sz=128 xtra_sz=0 Oct 14 21:00:47 easement kernel: pfm_context_alloc.854: CPU1 [14494] alloc ctx @e00000027e7dc000 Oct 14 21:00:47 easement kernel: pfm_alloc_fd.2143: CPU1 [14494] new inode ino=19977 @e0000040fdbfa680 Oct 14 21:00:47 easement kernel: pfm_setup_buffer_fmt.2436: CPU1 [14494] [14494] after validate(0x2,0,0000000000000000)=0 Oct 14 21:00:47 easement kernel: pfm_context_create.2716: CPU1 [14494] ctx=e00000027e7dc000 flags=0x2 system=1 notify_block=0 excl_idle=0 unsecure=0 no_msg=0 ctx_fd=4 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU1 [14494] pmc[1]=0x0 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU1 [14494] pmc[2]=0x0 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU1 [14494] pmc[3]=0x0 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU1 [14494] pmc[4]=0x800000 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU1 [14494] pmc[5]=0x0 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU1 [14494] pmc[6]=0x0 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU1 [14494] pmc[7]=0x0 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU1 [14494] pmc[8]=0xffffffff3fffffff Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU1 [14494] 
sys_perfmonctl.4728: CPU0 [14492] cmd=pfm_context_create idx=8 valid=1 narg=0x1 argsz=128 count=1 Oct 14 21:00:47 easement kernel: pfm_ctx_getsize.2539: CPU0 [14492] arg_size=0 Oct 14 21:00:47 easement kernel: sys_perfmonctl.4782: CPU0 [14492] [14492] restart_args sz=128 xtra_sz=0 Oct 14 21:00:47 easement kernel: pfm_context_alloc.854: CPU0 [14492] alloc ctx @e000000272172000 Oct 14 21:00:47 easement kernel: pfm_alloc_fd.2143: CPU0 [14492] new inode ino=19978 @e0000040fdbf8600 Oct 14 21:00:47 easement kernel: pfm_setup_buffer_fmt.2436: CPU0 [14492] [14492] after validate(0x2,0,0000000000000000)=0 Oct 14 21:00:47 easement kernel: pfm_context_create.2716: CPU0 [14492] ctx=e000000272172000 flags=0x2 system=1 notify_block=0 excl_idle=0 unsecure=0 no_msg=0 ctx_fd=4 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU0 [14492] pmc[1]=0x0 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU0 [14492] pmc[2]=0x0 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU0 [14492] pmc[3]=0x0 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU0 [14492] pmc[4]=0x800000 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU0 [14492] pmc[5]=0x0 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU0 [14492] pmc[6]=0x0 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU0 [14492] pmc[7]=0x0 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU0 [14492] pmc[8]=0xffffffff3fffffff Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU0 [14492] pmc[9]=0xffffffff3ffffffc Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU0 [14492] pmc[10]=0x0 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU0 [14492] pmc[11]=0x0 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU0 [14492] pmc[12]=0x0 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU0 [14492] pmc[13]=0x2078fefefefe Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU0 [14492] pmc[14]=0xdb60db60db60db6 
Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2476: CPU0 [14492] pmc[15]=0xfffffff0 Oct 14 21:00:47 easement kernel: pfm_reset_pmu_state.2512: CPU0 [14492] <4> all_pmcs=0xfffe all_pmds=0x3ffff Oct 14 21:00:47 easement kernel: sys_perfmonctl.4835: CPU0 [14492] cmd=pfm_context_create ret=0 Oct 14 21:00:47 easement kernel: sys_perfmonctl.4728: CPU0 [14492] cmd=pfm_write_pmcs idx=1 valid=1 narg=0xffffffff argsz=160 count=1 Oct 14 21:00:47 easement kernel: sys_perfmonctl.4809: CPU0 [14492] spinlock_irq_save ctx e000000272172000 by [14492] Oct 14 21:00:47 easement kernel: sys_perfmonctl.4809: CPU0 [14492] spinlocked ctx e000000272172000 by [14492] Oct 14 21:00:47 easement kernel: pfm_check_task_state.4648: CPU0 [14492] context 4 no task, state=1 Oct 14 21:00:47 easement kernel: pfm_mck_reserved.92: CPU0 [14492] pmc[4]=0x801269, mask=0xfffff7f, reset=0x800000, val=0x801269 Oct 14 21:00:47 easement kernel: pfm_write_pmcs.3063: CPU0 [14492] pmc[4]=0x801269 loaded=0 access_pmu=0 all_pmcs=0xfffe used_pmds=0x10 eventid=0 smpl_pmds=0x0 reset_pmds=0x0 reloads_pmcs=0x0 used_monitors=0x10 ovfl_regs=0x0 Oct 14 21:00:47 easement kernel: sys_perfmonctl.4824: CPU0 [14492] [14492] context unlocked Oct 14 21:00:47 easement kernel: sys_perfmonctl.4825: CPU0 [14492] spinlock_irq_restore ctx e000000272172000 by [14492] Oct 14 21:00:47 easement kernel: sys_perfmonctl.4835: CPU0 [14492] cmd=pfm_write_pmcs ret=0 Oct 14 21:00:47 easement kernel: sys_perfmonctl.4728: CPU0 [14492] cmd=pfm_write_pmds idx=2 valid=1 narg=0xffffffff argsz=160 count=1 Oct 14 21:00:47 easement kernel: sys_perfmonctl.4809: CPU0 [14492] spinlock_irq_save ctx e000000272172000 by [14492] Oct 14 21:00:47 easement kernel: sys_perfmonctl.4809: CPU0 [14492] spinlocked ctx e000000272172000 by [14492] Oct 14 21:00:47 easement kernel: pfm_check_task_state.4648: CPU0 [14492] context 4 no task, state=1 Oct 14 21:00:47 easement kernel: pfm_write_pmds.3246: CPU0 [14492] pmd[4]=0xfffffffffff92230 loaded=0 access_pmu=0, 
hw_value=0xfffffffffff92230 ctx_pmd=0xfffffffffff92230 short_reset=0xfffffffffff92230 long_reset=0x0 notify=N used_pmds=0x10 reset_pmds=0x0 reload_pmds=0x0 all_pmds=0x3ffff ovfl_regs=0x0 Oct 14 21:00:47 easement kernel: sys_perfmonctl.4824: CPU0 [14492] [14492] context unlocked Oct 14 21:00:47 easement kernel: sys_perfmonctl.4825: CPU0 [14492] spinlock_irq_restore ctx e000000272172000 by [14492] Oct 14 21:00:47 easement kernel: sys_perfmonctl.4835: CPU0 [14492] cmd=pfm_write_pmds ret=0 Oct 14 21:00:47 easement kernel: sys_perfmonctl.4728: CPU0 [14492] cmd=pfm_context_load idx=16 valid=1 narg=0x1 argsz=32 count=1 Oct 14 21:00:47 easement kernel: sys_perfmonctl.4809: CPU0 [14492] spinlock_irq_save ctx e000000272172000 by [14492] Oct 14 21:00:47 easement kernel: sys_perfmonctl.4809: CPU0 [14492] spinlocked ctx e000000272172000 by [14492] Oct 14 21:00:47 easement kernel: pfm_check_task_state.4648: CPU0 [14492] context 4 no task, state=1 Oct 14 21:00:47 easement kernel: pfm_context_load.4176: CPU0 [14492] load_pid [14492] Oct 14 21:00:47 easement kernel: pfm_bad_permissions.2376: CPU0 [14492] [14492] cur: uid=0 gid=0 task: euid=0 suid=0 uid=0 egid=0 sgid=0 Oct 14 21:00:47 easement kernel: pfm_reserve_session.1314: CPU0 [14492] in sys_sessions=0 task_sessions=0 dbregs=0 syswide=1 cpu=0 Oct 14 21:00:47 easement kernel: pfm_reserve_session.1328: CPU0 [14492] reserving system wide session on CPU0 currently on CPU0 Oct 14 21:00:47 easement kernel: pfm_reserve_session.1344: CPU0 [14492] out sys_sessions=1 task_sessions=0 dbregs=0 syswide=1 cpu=0 Oct 14 21:00:47 easement kernel: pfm_context_load.4249: CPU0 [14492] [14492] before cmpxchg() old_ctx=0000000000000000 new_ctx=e000000272172000 Oct 14 21:00:47 easement kernel: pfm_reset_msgq.791: CPU0 [14492] ctx=e000000272172000 msgq reset Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1070: CPU0 [14492] mask=0x3ffff Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[0]=0x0 soft_val=0x0 Oct 14 21:00:47 easement 
kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[1]=0x0 soft_val=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[2]=0x0 soft_val=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[3]=0x0 soft_val=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[4]=0x7ffffff92230 soft_val=0xffff800000000000 Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[5]=0x0 soft_val=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[6]=0x0 soft_val=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[7]=0x0 soft_val=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[8]=0x0 soft_val=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[9]=0x0 soft_val=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[10]=0x0 soft_val=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[11]=0x0 soft_val=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[12]=0x0 soft_val=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[13]=0x0 soft_val=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[14]=0x0 soft_val=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[15]=0x0 soft_val=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[16]=0x0 soft_val=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmds.1091: CPU0 [14492] pmd[17]=0x0 soft_val=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmcs.1105: CPU0 [14492] mask=0xfffe Oct 14 21:00:47 easement kernel: pfm_copy_pmcs.1110: CPU0 [14492] pmc[0]=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmcs.1110: CPU0 [14492] pmc[1]=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmcs.1110: CPU0 [14492] pmc[2]=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmcs.1110: CPU0 [14492] pmc[3]=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmcs.1110: CPU0 [14492] 
pmc[4]=0x801269 Oct 14 21:00:47 easement kernel: pfm_copy_pmcs.1110: CPU0 [14492] pmc[5]=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmcs.1110: CPU0 [14492] pmc[6]=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmcs.1110: CPU0 [14492] pmc[7]=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmcs.1110: CPU0 [14492] pmc[8]=0xffffffff3fffffff Oct 14 21:00:47 easement kernel: pfm_copy_pmcs.1110: CPU0 [14492] pmc[9]=0xffffffff3ffffffc Oct 14 21:00:47 easement kernel: pfm_copy_pmcs.1110: CPU0 [14492] pmc[10]=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmcs.1110: CPU0 [14492] pmc[11]=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmcs.1110: CPU0 [14492] pmc[12]=0x0 Oct 14 21:00:47 easement kernel: pfm_copy_pmcs.1110: CPU0 [14492] pmc[13]=0x2078fefefefe Oct 14 21:00:47 easement kernel: pfm_copy_pmcs.1110: CPU0 [14492] pmc[14]=0xdb60db60db60db6 Oct 14 21:00:47 easement kernel: pfm_copy_pmcs.1110: CPU0 [14492] pmc[15]=0xfffffff0 Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1048: CPU0 [14492] mask=0x3ffff Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[0]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[1]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[2]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[3]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[4]=0x7ffffff92230 Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[5]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[6]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[7]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[8]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[9]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[10]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[11]=0x0 Oct 14 
21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[12]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[13]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[14]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[15]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[16]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmds.1053: CPU0 [14492] pmd[17]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmcs.1121: CPU0 [14492] mask=0xfffe Oct 14 21:00:47 easement kernel: pfm_restore_pmcs.1125: CPU0 [14492] pmc[1]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmcs.1125: CPU0 [14492] pmc[2]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmcs.1125: CPU0 [14492] pmc[3]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmcs.1125: CPU0 [14492] pmc[4]=0x801269 Oct 14 21:00:47 easement kernel: pfm_restore_pmcs.1125: CPU0 [14492] pmc[5]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmcs.1125: CPU0 [14492] pmc[6]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmcs.1125: CPU0 [14492] pmc[7]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmcs.1125: CPU0 [14492] pmc[8]=0xffffffff3fffffff Oct 14 21:00:47 easement kernel: pfm_restore_pmcs.1125: CPU0 [14492] pmc[9]=0xffffffff3ffffffc Oct 14 21:00:47 easement kernel: pfm_restore_pmcs.1125: CPU0 [14492] pmc[10]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmcs.1125: CPU0 [14492] pmc[11]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmcs.1125: CPU0 [14492] pmc[12]=0x0 Oct 14 21:00:47 easement kernel: pfm_restore_pmcs.1125: CPU0 [14492] pmc[13]=0x2078fefefefe Oct 14 21:00:47 easement kernel: pfm_restore_pmcs.1125: CPU0 [14492] pmc[14]=0xdb60db60db60db6 Oct 14 21:00:47 easement kernel: pfm_restore_pmcs.1125: CPU0 [14492] pmc[15]=0xfffffff0 Oct 14 21:00:47 easement kernel: pfm_context_load.4331: CPU0 [14492] context loaded on PMU for [14492] Oct 14 21:00:47 easement kernel: sys_perfmonctl.4824: CPU0 
[14492] [14492] context unlocked Oct 14 21:00:47 easement kernel: sys_perfmonctl.4825: CPU0 [14492] spinlock_irq_restore ctx e000000272172000 by [14492] Oct 14 21:00:47 easement kernel: sys_perfmonctl.4835: CPU0 [14492] cmd=pfm_context_load ret=0 Oct 14 21:00:48 easement kernel: pmc[9]=0xffffffff3ffffffc Oct 14 21:00:48 easement kernel: pfm_reset_pmu_state.2476: CPU1 [14494] pmc[10]=0x0 Oct 14 21:00:48 easement kernel: pfm_reset_pmu_state.2476: CPU1 [14494] pmc[11]=0x0 Oct 14 21:00:48 easement kernel: pfm_reset_pmu_state.2476: CPU1 [14494] pmc[12]=0x0 Oct 14 21:00:48 easement kernel: pfm_reset_pmu_state.2476: CPU1 [14494] pmc[13]=0x2078fefefefe Oct 14 21:00:48 easement kernel: pfm_reset_pmu_state.2476: CPU1 [14494] pmc[14]=0xdb60db60db60db6 Oct 14 21:00:48 easement kernel: pfm_reset_pmu_state.2476: CPU1 [14494] sys_perfmonctl.4728: CPU0 [14492] cmd=pfm_start idx=5 valid=1 narg=0x0 argsz=0 count=0 Oct 14 21:00:48 easement kernel: sys_perfmonctl.4809: CPU0 [14492] spinlock_irq_save ctx e000000272172000 by [14492] Oct 14 21:00:48 easement kernel: sys_perfmonctl.4809: CPU0 [14492] spinlocked ctx e000000272172000 by [14492] Oct 14 21:00:48 easement kernel: pfm_check_task_state.4656: CPU0 [14492] context 4 state=2 [14492] task_state=0 must_stop=8 Oct 14 21:00:48 easement kernel: sys_perfmonctl.4824: CPU0 [14492] [14492] context unlocked Oct 14 21:00:48 easement kernel: sys_perfmonctl.4825: CPU0 [14492] spinlock_irq_restore ctx e000000272172000 by [14492] Oct 14 21:00:48 easement kernel: sys_perfmonctl.4835: CPU0 [14492] cmd=pfm_start ret=0 Oct 14 21:00:48 easement kernel: pmc[15]=0xfffffff0 Oct 14 21:00:48 easement kernel: pfm_reset_pmu_state.2512: CPU1 [14494] <4> all_pmcs=0xfffe all_pmds=0x3ffff Oct 14 21:00:48 easement kernel: sys_perfmonctl.4835: CPU1 [14494] cmd=pfm_context_create ret=0 Oct 14 21:00:48 easement kernel: sys_perfmonctl.4728: CPU1 [14494] cmd=pfm_write_pmcs idx=1 valid=1 narg=0xffffffff argsz=160 count=1 Oct 14 21:00:48 easement kernel: 
sys_perfmonctl.4809: CPU1 [14494] spinlock_irq_save ctx e00000027e7dc000 by [14494] Oct 14 21:00:48 easement kernel: sys_perfmonctl.4809: CPU1 [14494] spinlocked ctx e00000027e7dc000 by [14494] Oct 14 21:00:48 easement kernel: pfm_check_task_state.4648: CPU1 [14494] context 4 no task, state=1 Oct 14 21:00:48 easement kernel: pfm_mck_reserved.92: CPU1 [14494] pmc[4]=0x801269, mask=0xfffff7f, reset=0x800000, val=0x801269 Oct 14 21:00:49 easement kernel: pfm_write_pmcs.3063: CPU1 [14494] pmc[4]=0x801269 loaded=0 access_pmu=0 all_pmcs=0xfffe used_pmds=0x10 eventid=0 smpl_pmds=0x0 reset_pmds=0x0 reloads_pmcs=0x0 used_monitors=0x10 ovfl_regs=0x0 Oct 14 21:00:49 easement kernel: sys_perfmonctl.4824: CPU1 [14494] [14494] context unlocked Oct 14 21:00:49 easement kernel: sys_perfmonctl.4825: CPU1 [14494] spinlock_irq_restore ctx e00000027e7dc000 by [14494] Oct 14 21:00:49 easement kernel: sys_perfmonctl.4835: CPU1 [14494] cmd=pfm_write_pmcs ret=0 Oct 14 21:00:49 easement kernel: sys_perfmonctl.4728: CPU1 [14494] cmd=pfm_write_pmds idx=2 valid=1 narg=0xffffffff argsz=160 count=1 Oct 14 21:00:49 easement kernel: sys_perfmonctl.4809: CPU1 [14494] spinlock_irq_save ctx e00000027e7dc000 by [14494] Oct 14 21:00:49 easement kernel: sys_perfmonctl.4809: CPU1 [14494] spinlocked ctx e00000027e7dc000 by [14494] Oct 14 21:00:49 easement kernel: pfm_check_task_state.4648: CPU1 [14494] context 4 no task, state=1 Oct 14 21:00:49 easement kernel: pfm_write_pmds.3246: CPU1 [14494] pmd[4]=0xfffffffffff92230 loaded=0 access_pmu=0, hw_value=0xfffffffffff92230 ctx_pmd=0xfffffffffff92230 short_reset=0xfffffffffff92230 long_reset=0x0 notify=N used_pmds=0x10 reset_pmds=0x0 reload_pmds=0x0 all_pmds=0x3ffff ovfl_regs=0x0 Oct 14 21:00:49 easement kernel: sys_perfmonctl.4824: CPU1 [14494] [14494] context unlocked Oct 14 21:00:49 easement kernel: sys_perfmonctl.4825: CPU1 [14494] spinlock_irq_restore ctx e00000027e7dc000 by [14494] Oct 14 21:00:49 easement kernel: sys_perfmonctl.4835: CPU1 [14494] 
cmd=pfm_write_pmds ret=0 Oct 14 21:00:49 easement kernel: sys_perfmonctl.4728: CPU1 [14494] cmd=pfm_context_load idx=16 valid=1 narg=0x1 argsz=32 count=1 Oct 14 21:00:49 easement kernel: sys_perfmonctl.4809: CPU1 [14494] spinlock_irq_save ctx e00000027e7dc000 by [14494] Oct 14 21:00:49 easement kernel: sys_perfmonctl.4809: CPU1 [14494] spinlocked ctx e00000027e7dc000 by [14494] Oct 14 21:00:49 easement kernel: pfm_check_task_state.4648: CPU1 [14494] context 4 no task, state=1 Oct 14 21:00:49 easement kernel: pfm_context_load.4176: CPU1 [14494] load_pid [14494] Oct 14 21:00:49 easement kernel: pfm_bad_permissions.2376: CPU1 [14494] [14494] cur: uid=0 gid=0 task: euid=0 suid=0 uid=0 egid=0 sgid=0 Oct 14 21:00:49 easement kernel: pfm_reserve_session.1314: CPU1 [14494] in sys_sessions=1 task_sessions=0 dbregs=0 syswide=1 cpu=1 Oct 14 21:00:49 easement kernel: pfm_reserve_session.1328: CPU1 [14494] reserving system wide session on CPU1 currently on CPU1 Oct 14 21:00:49 easement kernel: pfm_reserve_session.1344: CPU1 [14494] out sys_sessions=2 task_sessions=0 dbregs=0 syswide=1 cpu=1 Oct 14 21:00:49 easement kernel: pfm_context_load.4249: CPU1 [14494] [14494] before cmpxchg() old_ctx=0000000000000000 new_ctx=e00000027e7dc000 Oct 14 21:00:49 easement kernel: pfm_reset_msgq.791: CPU1 [14494] ctx=e00000027e7dc000 msgq reset Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1070: CPU1 [14494] mask=0x3ffff Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[0]=0x0 soft_val=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[1]=0x0 soft_val=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[2]=0x0 soft_val=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[3]=0x0 soft_val=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[4]=0x7ffffff92230 soft_val=0xffff800000000000 Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[5]=0x0 soft_val=0x0 Oct 14 21:00:49 
easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[6]=0x0 soft_val=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[7]=0x0 soft_val=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[8]=0x0 soft_val=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[9]=0x0 soft_val=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[10]=0x0 soft_val=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[11]=0x0 soft_val=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[12]=0x0 soft_val=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[13]=0x0 soft_val=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[14]=0x0 soft_val=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[15]=0x0 soft_val=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[16]=0x0 soft_val=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmds.1091: CPU1 [14494] pmd[17]=0x0 soft_val=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmcs.1105: CPU1 [14494] mask=0xfffe Oct 14 21:00:49 easement kernel: pfm_copy_pmcs.1110: CPU1 [14494] pmc[0]=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmcs.1110: CPU1 [14494] pmc[1]=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmcs.1110: CPU1 [14494] pmc[2]=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmcs.1110: CPU1 [14494] pmc[3]=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmcs.1110: CPU1 [14494] pmc[4]=0x801269 Oct 14 21:00:49 easement kernel: pfm_copy_pmcs.1110: CPU1 [14494] pmc[5]=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmcs.1110: CPU1 [14494] pmc[6]=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmcs.1110: CPU1 [14494] pmc[7]=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmcs.1110: CPU1 [14494] pmc[8]=0xffffffff3fffffff Oct 14 21:00:49 easement kernel: pfm_copy_pmcs.1110: CPU1 [14494] pmc[9]=0xffffffff3ffffffc Oct 14 21:00:49 easement kernel: 
pfm_copy_pmcs.1110: CPU1 [14494] pmc[10]=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmcs.1110: CPU1 [14494] pmc[11]=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmcs.1110: CPU1 [14494] pmc[12]=0x0 Oct 14 21:00:49 easement kernel: pfm_copy_pmcs.1110: CPU1 [14494] pmc[13]=0x2078fefefefe Oct 14 21:00:49 easement kernel: pfm_copy_pmcs.1110: CPU1 [14494] pmc[14]=0xdb60db60db60db6 Oct 14 21:00:50 easement kernel: pfm_copy_pmcs.1110: CPU1 [14494] pmc[15]=0xfffffff0 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1048: CPU1 [14494] mask=0x3ffff Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] pmd[0]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] pmd[1]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] pmd[2]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] pmd[3]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] pmd[4]=0x7ffffff92230 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] pmd[5]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] pmd[6]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] pmd[7]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] pmd[8]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] pmd[9]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] pmd[10]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] pmd[11]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] pmd[12]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] pmd[13]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] pmd[14]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] pmd[15]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] pmd[16]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmds.1053: CPU1 [14494] 
pmd[17]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmcs.1121: CPU1 [14494] mask=0xfffe Oct 14 21:00:50 easement kernel: pfm_restore_pmcs.1125: CPU1 [14494] pmc[1]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmcs.1125: CPU1 [14494] pmc[2]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmcs.1125: CPU1 [14494] pmc[3]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmcs.1125: CPU1 [14494] pmc[4]=0x801269 Oct 14 21:00:50 easement kernel: pfm_restore_pmcs.1125: CPU1 [14494] pmc[5]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmcs.1125: CPU1 [14494] pmc[6]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmcs.1125: CPU1 [14494] pmc[7]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmcs.1125: CPU1 [14494] pmc[8]=0xffffffff3fffffff Oct 14 21:00:50 easement kernel: pfm_restore_pmcs.1125: CPU1 [14494] pmc[9]=0xffffffff3ffffffc Oct 14 21:00:50 easement kernel: pfm_restore_pmcs.1125: CPU1 [14494] pmc[10]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmcs.1125: CPU1 [14494] pmc[11]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmcs.1125: CPU1 [14494] pmc[12]=0x0 Oct 14 21:00:50 easement kernel: pfm_restore_pmcs.1125: CPU1 [14494] pmc[13]=0x2078fefefefe Oct 14 21:00:50 easement kernel: pfm_restore_pmcs.1125: CPU1 [14494] pmc[14]=0xdb60db60db60db6 Oct 14 21:00:50 easement kernel: pfm_restore_pmcs.1125: CPU1 [14494] pmc[15]=0xfffffff0 Oct 14 21:00:50 easement kernel: pfm_context_load.4331: CPU1 [14494] context loaded on PMU for [14494] Oct 14 21:00:50 easement kernel: sys_perfmonctl.4824: CPU1 [14494] [14494] context unlocked Oct 14 21:00:50 easement kernel: sys_perfmonctl.4825: CPU1 [14494] spinlock_irq_restore ctx e00000027e7dc000 by [14494] Oct 14 21:00:50 easement kernel: sys_perfmonctl.4835: CPU1 [14494] cmd=pfm_context_load ret=0 root@easement:~# cat /proc/perfmon model : Itanium 2 fastctxsw : No ovfl_mask : 0x7fffffffffff CPU0 overflow intrs : 116503834 CPU0 overflow cycles : 50694507445 CPU0 overflow min : 402 CPU0 overflow max : 5942 CPU0 smpl 
handler calls : 116504132 CPU0 smpl handler cycles : 9703871349 CPU0 spurious intrs : 1 CPU0 sysupdt count : 539760 CPU0 sysupdt cycles : 7356391 CPU0 syst_wide : 1 CPU0 dcr_pp : 1 CPU0 exclude idle : 0 CPU0 owner : 14492 CPU0 context : e000000272172000 CPU0 activations : 0 CPU1 overflow intrs : 201786422 CPU1 overflow cycles : 89293382727 CPU1 overflow min : 389 CPU1 overflow max : 5287 CPU1 smpl handler calls : 201786981 CPU1 smpl handler cycles : 18386030814 CPU1 spurious intrs : 0 CPU1 sysupdt count : 606900 CPU1 sysupdt cycles : 8378147 CPU1 syst_wide : 1 CPU1 dcr_pp : 0 CPU1 exclude idle : 0 CPU1 owner : 14494 CPU1 context : e00000027e7dc000 CPU1 activations : 0 proc_sessions : 0 sys_sessions : 2 sys_use_dbregs : 0 ptrace_use_dbregs : 0 format : 77-7a-6e-61-20-65-73-69-74-6e-72-20-61-65-0a-6c oprofile_format format : 4d-72-be-c0-06-64-41-43-82-b4-d3-fd-27-24-3c-97 default_format |
From: John L. <le...@mo...> - 2003-10-15 15:39:45
|
On Wed, Oct 15, 2003 at 03:33:35AM +0100, John Levon wrote: > There is no error message, it all apparently succeeds but there's no > interrupts generated. It looks like this /might/ have been the signal handling problems. Everything is safe now wrt signals for the children, so if I see the problem again, I'll tell you. I saw the "machine freeze on shutdown" problem again today, AFAICS that is quite likely to be some kind of perfmon bug, or some interaction problem. regards john |
From: Stephane E. <er...@hp...> - 2003-10-15 16:05:28
|
John, On Wed, Oct 15, 2003 at 04:39:40PM +0100, John Levon wrote: > On Wed, Oct 15, 2003 at 03:33:35AM +0100, John Levon wrote: > > > There is no error message, it all apparently succeeds but there's no > > interrupts generated. > > It looks like this /might/ have been the signal handling problems. > Everything is safe now wrt signals for the children, so if I see the > problem again, I'll tell you. > I am not sure what signal you are talking about. In your case you should not use any perfmon-related signal. I thought your tool was blocked on read for your buffer. > I saw the "machine freeze on shutdown" problem again today, AFAICS that > is quite likely to be some kind of perfmon bug, or some interaction > problem. Do you have an exact test case to reproduce this? -- -Stephane |
From: John L. <le...@mo...> - 2003-10-15 16:33:04
|
On Wed, Oct 15, 2003 at 09:02:30AM -0700, Stephane Eranian wrote: > I am not sure what signal you are talking about. In your case you should > not use any perfmon-related signal. I thought your tool was blocked on > read for your buffer. This is oprofile-internal stuff we needed to handle --start and --stop correctly. Previously there was a chance that PFM_START could be sent at the wrong time. This has been fixed. It's possible that my problems were related to that. > > I saw the "machine freeze on shutdown" problem again today, AFAICS that > > Do you have an exact test case to reproduce this? I'm afraid I don't, it happens randomly :( regards john -- Khendon's Law: If the same point is made twice by the same person, the thread is over. |
From: Stephane E. <er...@hp...> - 2003-10-15 16:51:25
Attachments:
pfdbg.c
|
John, On Wed, Oct 15, 2003 at 03:33:35AM +0100, John Levon wrote: > On Tue, Oct 14, 2003 at 07:05:29PM -0700, Stephane Eranian wrote: > > > I don't know what you mean by "properly". The library gives access to all > > the events and their "subevents", i.e. unit mask. Take the example of > > ALAT_CAPACITY_MISS, which is the main event. It has 3 unit masks possible: > > INT, FP, ALL. Therefore the library provides 3 "events": > > ALAT_CAPACITY_MISS_ALL > > ALAT_CAPACITY_MISS_FP > > ALAT_CAPACITY_MISS_INT > > For example, look at EXTERN_DP_PINS_0_TO_3 event. libpfm has : > > { "EXTERN_DP_PINS_0_TO_3_PIN0", {0x1009e}, 0xf0, 1, {0xf00000}, "DP Pins > 0-3 Asserted -- include pin0 assertion"}, > #define PME_ITA2_EXTERN_DP_PINS_0_TO_3_PIN1 220 > { "EXTERN_DP_PINS_0_TO_3_PIN1", {0x2009e}, 0xf0, 1, {0xf00000}, "DP Pins > 0-3 Asserted -- include pin1 assertion"}, > #define PME_ITA2_EXTERN_DP_PINS_0_TO_3_PIN2 221 > { "EXTERN_DP_PINS_0_TO_3_PIN2", {0x4009e}, 0xf0, 1, {0xf00000}, "DP Pins > 0-3 Asserted -- include pin2 assertion"}, > #define PME_ITA2_EXTERN_DP_PINS_0_TO_3_PIN3 222 > { "EXTERN_DP_PINS_0_TO_3_PIN3", {0x8009e}, 0xf0, 1, {0xf00000}, "DP Pins > 0-3 Asserted -- include pin3 assertion"}, > > i.e. one event for each bit on. But the unit mask is a *bitmask*, that > is, any combination of the PIN0-PIN3 bits can be set. The OProfile > equivalent has : > > event:0x9e counters:0,1,2,3 um:extern_dp_pins_0_to_3 minimum:5000 > name:EXTERN_DP_PINS_0_TO_3 : DP pins 0-3 asserted > > then : > > # CPU_IA64_2 Table 11-63 > name:extern_dp_pins_0_to_3 type:bitmask default:0xf > 0x1 PIN0 > 0x2 PIN1 > 0x4 PIN2 > 0x8 PIN3 > 0xf ALL > > So the user can specify a mask of PIN1+PIN3. This doesn't seem possible > with libpfm? > Yes, that's an error in libpfm. In this case, it seems reasonable to have the possibility to OR the umasks. And I already do this for some other events (such as I-EAR). 
However, here is a counterexample: event: BE_RSE_BUBBLE (0x01) umask: BANK_SWITCH = bx001 : back-end stall by RSE due to bank switching umask: AR_DEP = bx010 : back-end stall by RSE due to AR dependencies umask: OVERFLOW = bx011 : back-end stall by RSE due to need to spill Having the umask=bx011 is not equivalent to bx001|bx010; they don't measure the same thing. I think we are both right here and the rule seems to be that if two umasks don't have any bits in common (marked 0 or 1) and there are ignored bits (x) then you can possibly OR them together. In my RSE event, if there was a umask of b1xxx, then it could be ORed with the three I mentioned. I will go over the event list again and see what I missed. > > There is no error message, it all apparently succeeds but there's no > interrupts generated. > > I append the log when starting up oprofile below on one of the times > where this failure mode happened. I can't include the overflow log, > since one of the CPUs is working, and it would kill the machine :) > What do you mean by 'kill the machine'? The log looks reasonable to me. However, it looks like not both CPUs issued the PFM_START command. As for /proc/perfmon, it would be nice to have it reset before your bogus run. For this you can use the attached pfdbg program. You can use it to turn on/off the debug (not debug_ovfl). When you turn off debugging, it will reset the counts in /proc/perfmon. You need to link against libpfm. -- -Stephane |
From: John L. <le...@mo...> - 2003-10-15 17:11:49
|
On Wed, Oct 15, 2003 at 09:48:27AM -0700, Stephane Eranian wrote: > Yes, that's an error in libpfm. In this case, it seems reasonable to have > the possibility to OR the umasks. And I already do this for some other > events (such as I-EAR). However, here is a counterexample: > > event: BE_RSE_BUBBLE (0x01) > umask: BANK_SWITCH = bx001 : back-end stall by RSE due to bank switching > umask: AR_DEP = bx010 : back-end stall by RSE due to AR dependencies > umask: OVERFLOW = bx011 : back-end stall by RSE due to need to spill Such unit masks are handled as type "exclusive" in OProfile : name:be_rse_bubble type:exclusive default:0x00 0x0 ALL 0x1 BANK_SWITCH 0x2 AR_DEP 0x3 OVERFLOW 0x4 UNDERFLOW 0x5 LOADRS Thus, only one of these values can be set, and it shows up with an appropriate name. Some of these "exclusive" types (on other CPUs at least) don't allow arbitrary or-ing, so we just list all possibilities for them, as opposed to the bitmask types. We've found this to be quite a usable way of doing things. > > I append the log when starting up oprofile below on one of the times > > where this failure mode happened. I can't include the overflow log, > > since one of the CPUs is working, and it would kill the machine :) > > > What do you mean by 'kill the machine'? Everything is dead - no serial console (excepting the ESC-( thing of course), no ping etc. > The log looks reasonable to me. However, it looks like > not both CPU issued the PFM_START command. Phil pointed this out, it must have been the signal bug we had (an oprofile bug, not perfmon). > As for /proc/perfmon, it would be nice to have it reset before your bogus > run. For this you can use the attached pfdbg program. You can use it Ah, great, didn't know this was possible. I'll see if I can get a detailed log if/when things go wrong. regards john |
From: John L. <le...@mo...> - 2003-10-15 19:26:21
|
On Wed, Oct 15, 2003 at 06:11:45PM +0100, John Levon wrote: > I'll see if I can get a detailed log if/when things go wrong. OK, here we go. This is a loaded machine, and no interrupts are occurring beyond an initial flurry. Before starting up oprofiled I did a pfmdbg --off then --on First, oprofiled.log oprofiled started Wed Oct 15 13:04:12 2003 kernel pointer size: 8 Running perfmon child on CPU0. Waiting on CPU0 Perfmon child up on CPU0 Running perfmon child on CPU1. Waiting on CPU1 Perfmon child up on CPU1 PFM_START on CPU0 PFM_START on CPU1 Shows that both children started up and injected a PFM_START successfully. This is /proc/perfmon afterwards : model : Itanium 2 fastctxsw : No ovfl_mask : 0x7fffffffffff CPU0 overflow intrs : 29 CPU0 overflow cycles : 27532 CPU0 overflow min : 448 CPU0 overflow max : 5197 CPU0 smpl handler calls : 29 CPU0 smpl handler cycles : 8385 CPU0 spurious intrs : 0 CPU0 sysupdt count : 6560 CPU0 sysupdt cycles : 103194 CPU0 syst_wide : 1 CPU0 dcr_pp : 1 CPU0 exclude idle : 0 CPU0 owner : 7935 CPU0 context : e00000027e97c000 CPU0 activations : 0 CPU1 overflow intrs : 425 CPU1 overflow cycles : 422963 CPU1 overflow min : 533 CPU1 overflow max : 4879 CPU1 smpl handler calls : 425 CPU1 smpl handler cycles : 129930 CPU1 spurious intrs : 0 CPU1 sysupdt count : 5320 CPU1 sysupdt cycles : 84516 CPU1 syst_wide : 1 CPU1 dcr_pp : 1 CPU1 exclude idle : 0 CPU1 owner : 7938 CPU1 context : e000000277bfa000 CPU1 activations : 0 proc_sessions : 0 sys_sessions : 2 sys_use_dbregs : 0 ptrace_use_dbregs : 0 format : 77-7a-6e-61-20-65-73-69-74-6e-72-20-61-65-0a-6c oprofile_format format : 4d-72-be-c0-06-64-41-43-82-b4-d3-fd-27-24-3c-97 default_format Note the initial flurry of interrupts, but then nothing. Finally, /var/log/messages during this startup period. Note some of it is a bit inter-mixed due to the two children racing to enter stuff in the kernel syslog. This shows that both children have called pfm_start properly etc. 
Sorry, it's basically impossible to get a debug_ovfl log - even with 20 ints/s (setting CPU_CYCLES:95000000 on my 900 MHz CPUs), the machine gets so badly flooded it's not possible to get info out of it. And I've yet to get overflow info for one of the "some interrupts on a CPU and then nothing" cases. Has anybody else actually tried a system-wide session with perfmon on 2.6 across all CPUs? Is this new territory?
Oct 15 13:04:12 easement kernel: perfmon debugging off (timing reset) Oct 15 13:04:12 easement kernel: perfmon debugging on (timing reset) Oct 15 13:04:12 easement kernel: sys_perfmonctl.4835: CPU0 [7831] cmd=pfm_debug ret=0 Oct 15 13:04:12 easement kernel: sys_perfmonctl.4728: CPU0 [7935] cmd=pfm_context_create idx=8 valid=1 narg=0x1 argsz=128 count=1 Oct 15 13:04:12 easement kernel: pfm_ctx_getsize.2539: CPU0 [7935] arg_size=0 Oct 15 13:04:12 easement kernel: sys_perfmonctl.4782: CPU0 [7935] [7935] restart_args sz=128 xtra_sz=0 Oct 15 13:04:12 easement kernel: pfm_context_alloc.854: CPU0 [7935] alloc ctx @e00000027e97c000 Oct 15 13:04:12 easement kernel: pfm_alloc_fd.2143: CPU0 [7935] new inode ino=73454 @e0000040fd8d0b00 Oct 15 13:04:12 easement kernel: pfm_setup_buffer_fmt.2436: CPU0 [7935] [7935] after validate(0x2,0,0000000000000000)=0 Oct 15 13:04:12 easement kernel: pfm_context_create.2716: CPU0 [7935] ctx=e00000027e97c000 flags=0x2 system=1 notify_block=0 excl_idle=0 unsecure=0 no_msg=0 ctx_fd=6 Oct 15 13:04:12 easement kernel: pfm_reset_pmu_state.2476: CPU0 [7935] pmc[1]=0x0 Oct 15 13:04:12 easement kernel: pfm_reset_pmu_state.2476: CPU0 [7935] pmc[2]=0x0 Oct 15 13:04:12 easement kernel: pfm_reset_pmu_state.2476: CPU0 [7935] pmc[3]=0x0 Oct 15 13:04:12 easement kernel: pfm_reset_pmu_state.2476: CPU0 [7935] pmc[4]=0x800000 Oct 15 13:04:12 easement kernel: pfm_reset_pmu_state.2476: CPU0 [7935] pmc[5]=0x0 Oct 15 13:04:12 easement kernel: pfm_reset_pmu_state.2476: CPU0 [7935] pmc[6]=0x0 Oct 15 13:04:12 easement kernel: 
pfm_reset_pmu_state.2476: CPU0 [7935] pmc[7]=0x0 Oct 15 13:04:12 easement kernel: pfm_reset_pmu_state.2476: CPU0 [7935] pmc[8]=0xffffffff3fffffff Oct 15 13:04:12 easement kernel: pfm_reset_pmu_state.2476: CPU0 [7935] pmc[9]=0xffffffff3ffffffc Oct 15 13:04:12 easement kernel: pfm_reset_pmu_state.2476: CPU0 [7935] pmc[10]=0x0 Oct 15 13:04:12 easement kernel: pfm_reset_pmu_state.2476: CPU0 [7935] pmc[11]=0x0 Oct 15 13:04:12 easement kernel: pfm_reset_pmu_state.2476: CPU0 [7935] pmc[12]=0x0 Oct 15 13:04:12 easement kernel: pfm_reset_pmu_state.2476: CPU0 [7935] pmc[13]=0x2078fefefefe Oct 15 13:04:12 easement kernel: pfm_reset_pmu_state.2476: CPU0 [7935] pmc[14]=0xdb60db60db60db6 Oct 15 13:04:12 easement kernel: pfm_reset_pmu_state.2476: CPU0 [7935] pmc[15]=0xfffffff0 Oct 15 13:04:12 easement kernel: pfm_reset_pmu_state.2512: CPU0 [7935] <6> all_pmcs=0xfffe all_pmds=0x3ffff Oct 15 13:04:12 easement kernel: sys_perfmonctl.4835: CPU0 [7935] cmd=pfm_context_create ret=0 Oct 15 13:04:12 easement kernel: sys_perfmonctl.4728: CPU0 [7935] cmd=pfm_write_pmcs idx=1 valid=1 narg=0xffffffff argsz=160 count=1 Oct 15 13:04:12 easement kernel: sys_perfmonctl.4809: CPU0 [7935] spinlock_irq_save ctx e00000027e97c000 by [7935] Oct 15 13:04:12 easement kernel: sys_perfmonctl.4809: CPU0 [7935] spinlocked ctx e00000027e97c000 by [7935] Oct 15 13:04:12 easement kernel: pfm_check_task_state.4648: CPU0 [7935] context 6 no task, state=1 Oct 15 13:04:12 easement kernel: pfm_mck_reserved.92: CPU0 [7935] pmc[4]=0x801269, mask=0xfffff7f, reset=0x800000, val=0x801269 Oct 15 13:04:12 easement kernel: pfm_write_pmcs.3063: CPU0 [7935] pmc[4]=0x801269 loaded=0 access_pmu=0 all_pmcs=0xfffe used_pmds=0x10 eventid=0 smpl_pmds=0x0 reset_pmds=0x0 reloads_pmcs=0x0 used_monitors=0x10 ovfl_regs=0x0 Oct 15 13:04:12 easement kernel: sys_perfmonctl.4824: CPU0 [7935] [7935] context unlocked Oct 15 13:04:12 easement kernel: sys_perfmonctl.4825: CPU0 [7935] spinlock_irq_restore ctx e00000027e97c000 by [7935] Oct 15 
13:04:12 easement kernel: sys_perfmonctl.4835: CPU0 [7935] cmd=pfm_write_pmcs ret=0 Oct 15 13:04:12 easement kernel: sys_perfmonctl.4728: CPU0 [7935] cmd=pfm_write_pmds idx=2 valid=1 narg=0xffffffff argsz=160 count=1 Oct 15 13:04:12 easement kernel: sys_perfmonctl.4809: CPU0 [7935] spinlock_irq_save ctx e00000027e97c000 by [7935] Oct 15 13:04:12 easement kernel: sys_perfmonctl.4809: CPU0 [7935] spinlocked ctx e00000027e97c000 by [7935] Oct 15 13:04:12 easement kernel: pfm_check_task_state.4648: CPU0 [7935] context 6 no task, state=1 Oct 15 13:04:12 easement kernel: pfm_write_pmds.3246: CPU0 [7935] pmd[4]=0xfffffffffff92230 loaded=0 access_pmu=0, hw_value=0xfffffffffff92230 ctx_pmd=0xfffffffffff92230 short_reset=0xfffffffffff92230 long_reset=0x0 notify=N used_pmds=0x10 reset_pmds=0x0 reload_pmds=0x0 all_pmds=0x3ffff ovfl_regs=0x0 Oct 15 13:04:13 easement kernel: sys_perfmonctl.4824: CPU0 [7935] [7935] context unlocked Oct 15 13:04:13 easement kernel: sys_perfmonctl.4825: CPU0 [7935] spinlock_irq_restore ctx e00000027e97c000 by [7935] Oct 15 13:04:13 easement kernel: sys_perfmonctl.4835: CPU0 [7935] cmd=pfm_write_pmds ret=0 Oct 15 13:04:13 easement kernel: sys_perfmonctl.4728: CPU0 [7935] cmd=pfm_context_load idx=16 valid=1 narg=0x1 argsz=32 count=1 Oct 15 13:04:13 easement kernel: sys_perfmonctl.4809: CPU0 [7935] spinlock_irq_save ctx e00000027e97c000 by [7935] Oct 15 13:04:13 easement kernel: sys_perfmonctl.4809: CPU0 [7935] spinlocked ctx e00000027e97c000 by [7935] Oct 15 13:04:13 easement kernel: pfm_check_task_state.4648: CPU0 [7935] context 6 no task, state=1 Oct 15 13:04:13 easement kernel: pfm_context_load.4176: CPU0 [7935] load_pid [7935] Oct 15 13:04:13 easement kernel: pfm_bad_permissions.2376: CPU0 [7935] [7935] cur: uid=0 gid=0 task: euid=0 suid=0 uid=0 egid=0 sgid=0 Oct 15 13:04:13 easement kernel: pfm_reserve_session.1314: CPU0 [7935] in sys_sessions=0 task_sessions=0 dbregs=0 syswide=1 cpu=0 Oct 15 13:04:13 easement kernel: pfm_reserve_session.1328: 
CPU0 [7935] reserving system wide session on CPU0 currently on CPU0 Oct 15 13:04:13 easement kernel: pfm_reserve_session.1344: CPU0 [7935] out sys_sessions=1 task_sessions=0 dbregs=0 syswide=1 cpu=0 Oct 15 13:04:13 easement kernel: pfm_context_load.4249: CPU0 [7935] [7935] before cmpxchg() old_ctx=0000000000000000 new_ctx=e00000027e97c000 Oct 15 13:04:13 easement kernel: pfm_reset_msgq.791: CPU0 [7935] ctx=e00000027e97c000 msgq reset Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1070: CPU0 [7935] mask=0x3ffff Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[0]=0x0 soft_val=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[1]=0x0 soft_val=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[2]=0x0 soft_val=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[3]=0x0 soft_val=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[4]=0x7ffffff92230 soft_val=0xffff800000000000 Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[5]=0x0 soft_val=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[6]=0x0 soft_val=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[7]=0x0 soft_val=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[8]=0x0 soft_val=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[9]=0x0 soft_val=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[10]=0x0 soft_val=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[11]=0x0 soft_val=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[12]=0x0 soft_val=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[13]=0x0 soft_val=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[14]=0x0 soft_val=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[15]=0x0 soft_val=0x0 Oct 15 13:04:13 easement 
kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[16]=0x0 soft_val=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmds.1091: CPU0 [7935] pmd[17]=0x0 soft_val=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmcs.1105: CPU0 [7935] mask=0xfffe Oct 15 13:04:13 easement kernel: pfm_copy_pmcs.1110: CPU0 [7935] pmc[0]=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmcs.1110: CPU0 [7935] pmc[1]=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmcs.1110: CPU0 [7935] pmc[2]=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmcs.1110: CPU0 [7935] pmc[3]=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmcs.1110: CPU0 [7935] pmc[4]=0x801269 Oct 15 13:04:13 easement kernel: pfm_copy_pmcs.1110: CPU0 [7935] pmc[5]=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmcs.1110: CPU0 [7935] pmc[6]=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmcs.1110: CPU0 [7935] pmc[7]=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmcs.1110: CPU0 [7935] pmc[8]=0xffffffff3fffffff Oct 15 13:04:13 easement kernel: pfm_copy_pmcs.1110: CPU0 [7935] pmc[9]=0xffffffff3ffffffc Oct 15 13:04:13 easement kernel: pfm_copy_pmcs.1110: CPU0 [7935] pmc[10]=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmcs.1110: CPU0 [7935] pmc[11]=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmcs.1110: CPU0 [7935] pmc[12]=0x0 Oct 15 13:04:13 easement kernel: pfm_copy_pmcs.1110: CPU0 [7935] pmc[13]=0x2078fefefefe Oct 15 13:04:13 easement kernel: pfm_copy_pmcs.1110: CPU0 [7935] pmc[14]=0xdb60db60db60db6 Oct 15 13:04:13 easement kernel: pfm_copy_pmcs.1110: CPU0 [7935] pmc[15]=0xfffffff0 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1048: CPU0 [7935] mask=0x3ffff Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] pmd[0]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] pmd[1]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] pmd[2]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] pmd[3]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] 
pmd[4]=0x7ffffff92230 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] pmd[5]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] pmd[6]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] pmd[7]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] pmd[8]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] pmd[9]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] pmd[10]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] pmd[11]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] pmd[12]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] pmd[13]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] pmd[14]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] pmd[15]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] pmd[16]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmds.1053: CPU0 [7935] pmd[17]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmcs.1121: CPU0 [7935] mask=0xfffe Oct 15 13:04:13 easement kernel: pfm_restore_pmcs.1125: CPU0 [7935] pmc[1]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmcs.1125: CPU0 [7935] pmc[2]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmcs.1125: CPU0 [7935] pmc[3]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmcs.1125: CPU0 [7935] pmc[4]=0x801269 Oct 15 13:04:13 easement kernel: pfm_restore_pmcs.1125: CPU0 [7935] pmc[5]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmcs.1125: CPU0 [7935] pmc[6]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmcs.1125: CPU0 [7935] pmc[7]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmcs.1125: CPU0 [7935] pmc[8]=0xffffffff3fffffff Oct 15 13:04:13 easement kernel: pfm_restore_pmcs.1125: CPU0 [7935] pmc[9]=0xffffffff3ffffffc Oct 15 13:04:13 easement kernel: pfm_restore_pmcs.1125: CPU0 [7935] pmc[10]=0x0 Oct 15 13:04:13 easement kernel: 
pfm_restore_pmcs.1125: CPU0 [7935] pmc[11]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmcs.1125: CPU0 [7935] pmc[12]=0x0 Oct 15 13:04:13 easement kernel: pfm_restore_pmcs.1125: CPU0 [7935] pmc[13]=0x2078fefefefe Oct 15 13:04:13 easement kernel: pfm_restore_pmcs.1125: CPU0 [7935] pmc[14]=0xdb60db60db60db6 Oct 15 13:04:13 easement kernel: pfm_restore_pmcs.1125: CPU0 [7935] pmc[15]=0xfffffff0 oct 15 13:04:13 easement kernel: pfm_context_load.4331: CPU0 [7935] context loaded on PMU for [7935] Oct 15 13:04:13 easement kernel: sys_perfmonctl.4824: CPU0 [7935] [7935] context unlocked Oct 15 13:04:13 easement kernel: sys_perfmonctl.4825: CPU0 [7935] spinlock_irq_restore ctx e00000027e97c000 by [7935] Oct 15 13:04:13 easement kernel: sys_perfmonctl.4835: CPU0 [7935] cmd=pfm_context_load ret=0 Oct 15 13:04:14 easement kernel: sys_perfmonctl.4728: CPU1 [7938] cmd=pfm_context_create idx=8 valid=1 narg=0x1 argsz=128 count=1 Oct 15 13:04:14 easement kernel: pfm_ctx_getsize.2539: CPU1 [7938] arg_size=0 Oct 15 13:04:14 easement kernel: sys_perfmonctl.4782: CPU1 [7938] [7938] restart_args sz=128 xtra_sz=0 Oct 15 13:04:14 easement kernel: pfm_context_alloc.854: CPU1 [7938] alloc ctx @e000000277bfa000 Oct 15 13:04:14 easement kernel: pfm_alloc_fd.2143: CPU1 [7938] new inode ino=73457 @e0000040fd8d0d80 Oct 15 13:04:14 easement kernel: pfm_setup_buffer_fmt.2436: CPU1 [7938] [7938] after validate(0x2,0,0000000000000000)=0 Oct 15 13:04:14 easement kernel: pfm_context_create.2716: CPU1 [7938] ctx=e000000277bfa000 flags=0x2 system=1 notify_block=0 excl_idle=0 unsecure=0 no_msg=0 ctx_fd=6 Oct 15 13:04:14 easement kernel: pfm_reset_pmu_state.2476: CPU1 [7938] pmc[1]=0x0 Oct 15 13:04:14 easement kernel: pfm_reset_pmu_state.2476: CPU1 [7938] pmc[2]=0x0 Oct 15 13:04:14 easement kernel: pfm_reset_pmu_state.2476: CPU1 [7938] pmc[3]=0x0 Oct 15 13:04:14 easement kernel: pfm_reset_pmu_state.2476: CPU1 [7938] pmc[4]=0x800000 Oct 15 13:04:14 easement kernel: pfm_reset_pmu_state.2476: CPU1 [7938] 
pmc[5]=0x0 Oct 15 13:04:14 easement kernel: pfm_reset_pmu_state.2476: CPU1 [7938] pmc[6]=0x0 Oct 15 13:04:14 easement kernel: pfm_reset_pmu_state.2476: CPU1 [7938] pmc[7]=0x0 Oct 15 13:04:14 easement kernel: pfm_reset_pmu_state.2476: CPU1 [7938] pmc[8]=0xffffffff3fffffff Oct 15 13:04:14 easement kernel: pfm_reset_pmu_state.2476: CPU1 [7938] pmc[9]=0xffffffff3ffffffc Oct 15 13:04:14 easement kernel: pfm_reset_pmu_state.2476: CPU1 [7938] pmc[10]=0x0 Oct 15 13:04:14 easement kernel: pfm_reset_pmu_state.2476: CPU1 [7938] pmc[11]=0x0 Oct 15 13:04:14 easement kernel: pfm_reset_pmu_state.2476: CPU1 [7938] pmc[12]=0x0 Oct 15 13:04:14 easement kernel: pfm_reset_pmu_state.2476: CPU1 [7938] pmc[13]=0x2078fefefefe Oct 15 13:04:14 easement kernel: pfm_reset_pmu_state.2476: CPU1 [7938] pmc[14]=0xdb60db60db60db6 Oct 15 13:04:14 easement kernel: pfm_reset_pmu_state.2476: CPU1 [7938] pmc[15]=0xfffffff0 Oct 15 13:04:14 easement kernel: pfm_reset_pmu_state.2512: CPU1 [7938] <6> all_pmcs=0xfffe all_pmds=0x3ffff Oct 15 13:04:14 easement kernel: sys_perfmonctl.4835: CPU1 [7938] cmd=pfm_context_create ret=0 Oct 15 13:04:14 easement kernel: sys_perfmonctl.4728: CPU1 [7938] cmd=pfm_write_pmcs idx=1 valid=1 narg=0xffffffff argsz=160 count=1 Oct 15 13:04:14 easement kernel: sys_perfmonctl.4809: CPU1 [7938] spinlock_irq_save ctx e000000277bfa000 by [7938] Oct 15 13:04:14 easement kernel: sys_perfmonctl.4809: CPU1 [7938] spinlocked ctx e000000277bfa000 by [7938] Oct 15 13:04:14 easement kernel: pfm_check_task_state.4648: CPU1 [7938] context 6 no task, state=1 Oc:1 t 15 13:04:14 easement kernel: pfm_mck_reserved.92: CPU1 [7938] pmc[4]=0x801269, mask=0xfffff7f, reset=0x800000, val=0x801269 Oct 15 13:04:14 easement kernel: pfm_write_pmcs.3063: CPU1 [7938] pmc[4]=0x801269 loaded=0 access_pmu=0 all_pmcs=0xfffe used_pmds=0x10 eventid=0 smpl_pmds=0x0 reset_pmds=0x0 reloads_pmcs=0x0 used_monitors=0x10 ovfl_regs=0x0 Oct 15 13:04:14 easement kernel: sys_perfmonctl.4824: CPU1 [7938] [7938] context 
unlocked
Oct 15 13:04:14 easement kernel: sys_perfmonctl.4825: CPU1 [7938] spinlock_irq_restore ctx e000000277bfa000 by [7938]
Oct 15 13:04:14 easement kernel: sys_perfmonctl.4835: CPU1 [7938] cmd=pfm_write_pmcs ret=0
Oct 15 13:04:14 easement kernel: sys_perfmonctl.4728: CPU1 [7938] cmd=pfm_write_pmds idx=2 valid=1 narg=0xffffffff argsz=160 count=1
Oct 15 13:04:14 easement kernel: sys_perfmonctl.4809: CPU1 [7938] spinlock_irq_save ctx e000000277bfa000 by [7938]
Oct 15 13:04:14 easement kernel: sys_perfmonctl.4809: CPU1 [7938] spinlocked ctx e000000277bfa000 by [7938]
Oct 15 13:04:14 easement kernel: pfm_check_task_state.4648: CPU1 [7938] context 6 no task, state=1
Oct 15 13:04:14 easement kernel: pfm_write_pmds.3246: CPU1 [7938] pmd[4]=0xfffffffffff92230 loaded=0 access_pmu=0, hw_value=0xfffffffffff92230 ctx_pmd=0xfffffffffff92230 short_reset=0xfffffffffff92230 long_reset=0x0 notify=N used_pmds=0x10 reset_pmds=0x0 reload_pmds=0x0 all_pmds=0x3ffff ovfl_regs=0x0
Oct 15 13:04:14 easement kernel: sys_perfmonctl.4824: CPU1 [7938] [7938] context unlocked
Oct 15 13:04:14 easement kernel: sys_perfmonctl.4825: CPU1 [7938] spinlock_irq_restore ctx e000000277bfa000 by [7938]
Oct 15 13:04:14 easement kernel: sys_perfmonctl.4835: CPU1 [7938] cmd=pfm_write_pmds ret=0
Oct 15 13:04:14 easement kernel: sys_perfmonctl.4728: CPU1 [7938] cmd=pfm_context_load idx=16 valid=1 narg=0x1 argsz=32 count=1
Oct 15 13:04:14 easement kernel: sys_perfmonctl.4809: CPU1 [7938] spinlock_irq_save ctx e000000277bfa000 by [7938]
Oct 15 13:04:14 easement kernel: sys_perfmonctl.4809: CPU1 [7938] spinlocked ctx e000000277bfa000 by [7938]
Oct 15 13:04:14 easement kernel: pfm_check_task_state.4648: CPU1 [7938] context 6 no task, state=1
Oct 15 13:04:14 easement kernel: pfm_context_load.4176: CPU1 [7938] load_pid [7938]
Oct 15 13:04:14 easement kernel: pfm_bad_permissions.2376: CPU1 [7938] [7938] cur: uid=0 gid=0 task: euid=0 suid=0 uid=0 egid=0 sgid=0
Oct 15 13:04:14 easement kernel: pfm_reserve_session.1314: CPU1 [7938] in sys_sessions=1 task_sessions=0 dbregs=0 syswide=1 cpu=1
Oct 15 13:04:14 easement kernel: pfm_reserve_session.1328: CPU1 [7938] reserving system wide session on CPU1 currently on CPU1
Oct 15 13:04:14 easement kernel: pfm_reserve_session.1344: CPU1 [7938] out sys_sessions=2 task_sessions=0 dbregs=0 syswide=1 cpu=1
Oct 15 13:04:14 easement kernel: pfm_context_load.4249: CPU1 [7938] [7938] before cmpxchg() old_ctx=0000000000000000 new_ctx=e000000277bfa000
Oct 15 13:04:14 easement kernel: pfm_reset_msgq.791: CPU1 [7938] ctx=e000000277bfa000 msgq reset
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1070: CPU1 [7938] mask=0x3ffff
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[0]=0x0 soft_val=0x0
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[1]=0x0 soft_val=0x0
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[2]=0x0 soft_val=0x0
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[3]=0x0 soft_val=0x0
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[4]=0x7ffffff92230 soft_val=0xffff800000000000
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[5]=0x0 soft_val=0x0
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[6]=0x0 soft_val=0x0
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[7]=0x0 soft_val=0x0
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[8]=0x0 soft_val=0x0
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[9]=0x0 soft_val=0x0
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[10]=0x0 soft_val=0x0
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[11]=0x0 soft_val=0x0
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[12]=0x0 soft_val=0x0
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[13]=0x0 soft_val=0x0
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[14]=0x0 soft_val=0x0
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[15]=0x0 soft_val=0x0
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[16]=0x0 soft_val=0x0
Oct 15 13:04:14 easement kernel: pfm_copy_pmds.1091: CPU1 [7938] pmd[17]=0x0 soft_val=0x0
Oct 15 13:04:15 easement kernel: pfm_copy_pmcs.1105: CPU1 [7938] mask=0xfffe
Oct 15 13:04:15 easement kernel: pfm_copy_pmcs.1110: CPU1 [7938] pmc[0]=0x0
Oct 15 13:04:15 easement kernel: pfm_copy_pmcs.1110: CPU1 [7938] pmc[1]=0x0
Oct 15 13:04:15 easement kernel: pfm_copy_pmcs.1110: CPU1 [7938] pmc[2]=0x0
Oct 15 13:04:15 easement kernel: pfm_copy_pmcs.1110: CPU1 [7938] pmc[3]=0x0
Oct 15 13:04:15 easement kernel: pfm_copy_pmcs.1110: CPU1 [7938] pmc[4]=0x801269
Oct 15 13:04:15 easement kernel: pfm_copy_pmcs.1110: CPU1 [7938] pmc[5]=0x0
Oct 15 13:04:15 easement kernel: pfm_copy_pmcs.1110: CPU1 [7938] pmc[6]=0x0
Oct 15 13:04:15 easement kernel: pfm_copy_pmcs.1110: CPU1 [7938] pmc[7]=0x0
Oct 15 13:04:15 easement kernel: pfm_copy_pmcs.1110: CPU1 [7938] pmc[8]=0xffffffff3fffffff
Oct 15 13:04:15 easement kernel: pfm_copy_pmcs.1110: CPU1 [7938] pmc[9]=0xffffffff3ffffffc
Oct 15 13:04:15 easement kernel: pfm_copy_pmcs.1110: CPU1 [7938] pmc[10]=0x0
Oct 15 13:04:15 easement kernel: pfm_copy_pmcs.1110: CPU1 [7938] pmc[11]=0x0
Oct 15 13:04:15 easement kernel: pfm_copy_pmcs.1110: CPU1 [7938] pmc[12]=0x0
Oct 15 13:04:15 easement kernel: pfm_copy_pmcs.1110: CPU1 [7938] pmc[13]=0x2078fefefefe
Oct 15 13:04:15 easement kernel: pfm_copy_pmcs.1110: CPU1 [7938] pmc[14]=0xdb60db60db60db6
Oct 15 13:04:15 easement kernel: pfm_copy_pmcs.1110: CPU1 [7938] pmc[15]=0xfffffff0
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1048: CPU1 [7938] mask=0x3ffff
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[0]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[1]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[2]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[3]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[4]=0x7ffffff92230
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[5]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[6]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[7]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[8]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[9]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[10]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[11]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[12]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[13]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[14]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[15]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[16]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmds.1053: CPU1 [7938] pmd[17]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmcs.1121: CPU1 [7938] mask=0xfffe
Oct 15 13:04:15 easement kernel: pfm_restore_pmcs.1125: CPU1 [7938] pmc[1]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmcs.1125: CPU1 [7938] pmc[2]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmcs.1125: CPU1 [7938] pmc[3]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmcs.1125: CPU1 [7938] pmc[4]=0x801269
Oct 15 13:04:15 easement kernel: pfm_restore_pmcs.1125: CPU1 [7938] pmc[5]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmcs.1125: CPU1 [7938] pmc[6]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmcs.1125: CPU1 [7938] pmc[7]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmcs.1125: CPU1 [7938] pmc[8]=0xffffffff3fffffff
Oct 15 13:04:15 easement kernel: pfm_restore_pmcs.1125: CPU1 [7938] pmc[9]=0xffffffff3ffffffc
Oct 15 13:04:15 easement kernel: pfm_restore_pmcs.1125: CPU1 [7938] pmc[10]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmcs.1125: CPU1 [7938] pmc[11]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmcs.1125: CPU1 [7938] pmc[12]=0x0
Oct 15 13:04:15 easement kernel: pfm_restore_pmcs.1125: CPU1 [7938] pmc[13]=0x2078fefefefe
Oct 15 13:04:15 easement kernel: pfm_restore_pmcs.1125: CPU1 [7938] pmc[14]=0xdb60db60db60db6
Oct 15 13:04:15 easement kernel: pfm_restore_pmcs.1125: CPU1 [7938] pmc[15]=0xfffffff0
Oct 15 13:04:15 easement kernel: pfm_context_load.4331: CPU1 [7938] context loaded on PMU for [7938]
Oct 15 13:04:15 easement kernel: sys_perfmonctl.4824: CPU1 [7938] [7938] context unlocked
Oct 15 13:04:15 easement kernel: sys_perfmonctl.4825: CPU1 [7938] spinlock_irq_restore ctx e000000277bfa000 by [7938]
Oct 15 13:04:15 easement kernel: sys_perfmonctl.4835: CPU1 [7938] cmd=pfm_context_load ret=0
Oct 15 13:04:16 easement kernel: sys_perfmonctl.4728: CPU0 [7935] sys_perfmonctl.4728: CPU1 [7938] cmd=pfm_start idx=5 valid=1 narg=0x0 argsz=0 count=0
Oct 15 13:04:16 easement kernel: sys_perfmonctl.4809: CPU1 [7938] spinlock_irq_save ctx e000000277bfa000 by [7938]
Oct 15 13:04:16 easement kernel: sys_perfmonctl.4809: CPU1 [7938] spinlocked ctx e000000277bfa000 by [7938]
Oct 15 13:04:16 easement kernel: pfm_check_task_state.4656: CPU1 [7938] context 6 state=2 [7938] task_state=0 must_stop=8
Oct 15 13:04:16 easement kernel: sys_perfmonctl.4824: CPU1 [7938] [7938] context unlocked
Oct 15 13:04:16 easement kernel: sys_perfmonctl.4825: CPU1 [7938] spinlock_irq_restore ctx e000000277bfa000 by [7938]
Oct 15 13:04:16 easement kernel: sys_perfmonctl.4835: CPU1 [7938] cmd=pfm_start ret=0
Oct 15 13:04:16 easement kernel: cmd=pfm_start idx=5 valid=1 narg=0x0 argsz=0 count=0
Oct 15 13:04:16 easement kernel: sys_perfmonctl.4809: CPU0 [7935] spinlock_irq_save ctx e00000027e97c000 by [7935]
Oct 15 13:04:16 easement kernel: sys_perfmonctl.4809: CPU0 [7935] spinlocked ctx e00000027e97c000 by [7935]
Oct 15 13:04:16 easement kernel: pfm_check_task_state.4656: CPU0 [7935] context 6 state=2 [7935] task_state=0 must_stop=8
Oct 15 13:04:16 easement kernel: sys_perfmonctl.4824: CPU0 [7935] [7935] context unlocked
Oct 15 13:04:16 easement kernel: sys_perfmonctl.4825: CPU0 [7935] spinlock_irq_restore ctx e00000027e97c000 by [7935]
Oct 15 13:04:16 easement kernel: sys_perfmonctl.4835: CPU0 [7935] cmd=pfm_start ret=0

--
Khendon's Law: If the same point is made twice by the same person, the
thread is over.
|
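When sifting a flattened dump like the one above, a throwaway filter helps spot which CPU is doing what. A minimal sketch (the regex is my assumption about the `func.line: CPUn [pid]` shape of these DPRINT entries, not part of perfmon itself):

```python
import re
from collections import Counter

# Matches perfmon debug entries of the form "func.line: CPUn [pid] ..."
ENTRY = re.compile(r"(\w+)\.(\d+): CPU(\d+) \[(\d+)\]")

def summarize(log_text):
    """Count debug entries per (cpu, function) to spot per-CPU imbalances."""
    counts = Counter()
    for m in ENTRY.finditer(log_text):
        func, _line, cpu, _pid = m.groups()
        counts[(int(cpu), func)] += 1
    return counts

# A few entries lifted from the dump above:
sample = (
    "pfm_restore_pmds.1053: CPU1 [7938] pmd[4]=0x7ffffff92230 "
    "sys_perfmonctl.4835: CPU1 [7938] cmd=pfm_start ret=0 "
    "sys_perfmonctl.4835: CPU0 [7935] cmd=pfm_start ret=0"
)
print(summarize(sample))
```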
From: Stephane E. <er...@hp...> - 2003-10-15 21:36:54
|
John,

On Wed, Oct 15, 2003 at 08:21:51PM +0100, John Levon wrote:
> On Wed, Oct 15, 2003 at 06:11:45PM +0100, John Levon wrote:
>
> > I'll see if I can get a detailed log if/when things go wrong.
>
> OK, here we go. This is a loaded machine, and no interrupts are
> occurring beyond an initial flurry.
>
By initial, you mean until your buffer fills up the first time?

From /proc/perfmon, it looks like CPU1 is getting more overflow
interrupts than CPU0.

> model : Itanium 2
> fastctxsw : No
> ovfl_mask : 0x7fffffffffff
> CPU0 overflow intrs : 29
> CPU0 overflow cycles : 27532
> CPU0 overflow min : 448
> CPU0 overflow max : 5197
> CPU0 smpl handler calls : 29
> CPU0 smpl handler cycles : 8385
> CPU0 spurious intrs : 0
> CPU0 sysupdt count : 6560
> CPU0 sysupdt cycles : 103194
> CPU0 syst_wide : 1
> CPU0 dcr_pp : 1
> CPU0 exclude idle : 0
> CPU0 owner : 7935
> CPU0 context : e00000027e97c000
> CPU0 activations : 0
> CPU1 overflow intrs : 425
> CPU1 overflow cycles : 422963
> CPU1 overflow min : 533
> CPU1 overflow max : 4879
> CPU1 smpl handler calls : 425
> CPU1 smpl handler cycles : 129930
> CPU1 spurious intrs : 0
> CPU1 sysupdt count : 5320
> CPU1 sysupdt cycles : 84516
> CPU1 syst_wide : 1
> CPU1 dcr_pp : 1
> CPU1 exclude idle : 0
> CPU1 owner : 7938
> CPU1 context : e000000277bfa000
> CPU1 activations : 0
> proc_sessions : 0
> sys_sessions : 2
> sys_use_dbregs : 0
> ptrace_use_dbregs : 0
> format : 77-7a-6e-61-20-65-73-69-74-6e-72-20-61-65-0a-6c oprofile_format
> format : 4d-72-be-c0-06-64-41-43-82-b4-d3-fd-27-24-3c-97 default_format
>
> Finally, /var/log/messages during this startup period. Note some of it
> is a bit inter-mixed due to the two children racing to enter stuff in
> the kernel syslog.
>
Yes, I know about this problem. I think it would be useful for you to
limit your tool to monitoring only one CPU.

> Sorry, it's basically impossible to get a debug_ovfl log - even with
> 20 ints/s (setting CPU_CYCLES:95000000 on my 900 MHz CPUs), the machine
> gets so badly flooded it's not possible to get info out of it. And I've
> yet to get overflow info for one of the "some interrupts on a CPU and
> then nothing" cases.

That is not normal. I just tried that on my 2-way 1GHz/McKinley using a
serial console and there is no major problem; I even set the period to
1 int/sec. The log you sent is still showing a much shorter period -
450000 cycles between samples. Go to once per second to make this more
manageable.

> Is there anybody else who has actually tried a system-wide session with
> perfmon on 2.6 across all CPUs? Is this new territory here?

Yes, I do that every day and it appears to work fine.

--
-Stephane
|
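The 450000-cycle figure and the `pmd[4]` value in the earlier debug log are two views of the same number: perfmon data registers count upward and interrupt on overflow, so a sampling period is programmed as its two's complement. A quick sketch of the arithmetic (plain Python, not the perfmon API; the 47-bit `ovfl_mask` split is taken from the /proc/perfmon output above):

```python
def pmd_reset_value(period_cycles, width=64):
    """Two's-complement preload so an up-counter overflows after
    `period_cycles` increments."""
    return (1 << width) - period_cycles

def period_for_rate(cpu_hz, ints_per_sec):
    """CPU_CYCLES sampling period for a target interrupt rate."""
    return cpu_hz // ints_per_sec

# The pmd[4]=0xfffffffffff92230 written in the log is a 450000-cycle
# period - the figure Stephane reads out of the log:
assert pmd_reset_value(450_000) == 0xFFFFFFFFFFF92230

# The hardware keeps only the low 47 bits (ovfl_mask 0x7fffffffffff);
# the 0x7ffffff92230 seen in pfm_copy_pmds is that masked value:
assert 0xFFFFFFFFFFF92230 & 0x7FFFFFFFFFFF == 0x7FFFFFF92230

# ~1 interrupt/sec on a 900 MHz CPU, as Stephane suggests:
print(hex(pmd_reset_value(period_for_rate(900_000_000, 1))))
```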
From: John L. <le...@mo...> - 2003-10-16 00:11:18
|
On Wed, Oct 15, 2003 at 02:33:56PM -0700, Stephane Eranian wrote:
> > OK, here we go. This is a loaded machine, and no interrupts are
> > occurring beyond an initial flurry.
>
> By initial, you mean until your buffer fills up the first time?

By initial I mean "after starting up perfmon with PFM_START". I.e. at
the beginning there are a few interrupts, and then nothing at all after
that.

> Yes, I do that every day and it appears to work fine.

Do you have a test program I can run, please?

regards
john

--
Khendon's Law: If the same point is made twice by the same person, the
thread is over.
|
From: John L. <le...@mo...> - 2003-10-16 00:29:20
|
On Thu, Oct 16, 2003 at 01:07:52AM +0100, John Levon wrote:
> > By initial, you mean until your buffer fills up the first time?
>
> By initial I mean "after starting up perfmon with PFM_START". I.e. at
> the beginning there's a few interrupts, and then nothing at all after
> that.

I tried to get the overflow debug going again - same problem. Basically,
when I try to kill oprofiled, it jumps to 99% CPU (!) and is unkillable.
Going in over the serial console shows that the log is massively filled
with overflow debug messages, even though it's only supposed to be one
interrupt a second.

I have no idea, quite frankly; I'm just going to disable the code.

regards,
john

--
Khendon's Law: If the same point is made twice by the same person, the
thread is over.
|