From: Andi K. <an...@fi...> - 2008-08-20 16:40:33
|
This patchkit implements architectural perfmon support in oprofile. This allows generic profiling of a few standard events on all newer Intel CPUs, including Atom and Nehalem. The CPU describes its events in CPUID, so they can be used without knowing anything about the CPU.

The code requires some changes to the oprofile userland, which I am posting separately to the oprofile list.

-Andi |
From: Andi K. <an...@fi...> - 2008-08-20 16:40:33
|
From: Andi Kleen <ak...@li...>

It's actually useless now, but document it anyway.

Signed-off-by: Andi Kleen <ak...@li...>
---
 Documentation/kernel-parameters.txt |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 10c8b1b..5e77e1a 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -1491,6 +1491,12 @@ and is between 256 and 4096 characters. It is defined in the file
 			in oprofile on Intel CPUs. The kernel selects the
 			correct default on its own.
 
+	oprofile.p4force=1	[X86]
+			On Intel NetBurst CPUs assume new models are compatible
+			to older ones. This might allow oprofile to be used when
+			the kernel doesn't know the CPU, but is slightly dangerous.
+			Should be obsolete by now.
+
 	osst=		[HW,SCSI] SCSI Tape Driver
 			Format: <buffer_size>,<write_threshold>
 			See also Documentation/scsi/st.txt.
-- 
1.5.6 |
From: Robert R. <rob...@am...> - 2008-09-25 20:12:11
|
On 20.08.08 18:40:33, Andi Kleen wrote:
> From: Andi Kleen <ak...@li...>
>
> It's actually useless now, but document it anyways.

We should rework or remove this (maybe later) if it no longer makes sense to keep it. If we had a force_cpu_type implementation, this could be thrown away. But as long as it is in, it's better to have it documented.

-Robert

> Signed-off-by: Andi Kleen <ak...@li...>
>
> [...]
>
> +	oprofile.p4force=1	[X86]
> +			On Intel NetBurst CPUs assume new models are compatible
> +			to older ones. This might allow oprofile to be used when
> +			the kernel doesn't know the CPU, but is slightly dangerous.
> +			Should be obsolete by now.
> +
>
> [...]
> -- 
> 1.5.6

-- 
Advanced Micro Devices, Inc.
Operating System Research Center
email: rob...@am... |
From: Andi K. <an...@fi...> - 2008-08-20 16:40:35
|
From: Andi Kleen <ak...@li...>

Allow it to be modified at runtime.

Signed-off-by: Andi Kleen <ak...@li...>
---
 arch/x86/oprofile/op_x86_model.h |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/oprofile/op_x86_model.h b/arch/x86/oprofile/op_x86_model.h
index 45b605f..575e08e 100644
--- a/arch/x86/oprofile/op_x86_model.h
+++ b/arch/x86/oprofile/op_x86_model.h
@@ -32,8 +32,8 @@ struct pt_regs;
  * various x86 CPU models' perfctr support.
  */
 struct op_x86_model_spec {
-	unsigned int const num_counters;
-	unsigned int const num_controls;
+	unsigned int num_counters;
+	unsigned int num_controls;
 	void (*fill_in_addresses)(struct op_msrs * const msrs);
 	void (*setup_ctrs)(struct op_msrs const * const msrs);
 	int (*check_ctrs)(struct pt_regs * const regs,
-- 
1.5.6 |
From: Andi K. <an...@fi...> - 2008-08-20 16:40:35
|
From: Andi Kleen <ak...@li...>

This essentially reverts Linus' earlier 4b9f12a3779c548b68bc9af7d94030868ad3aa1b commit. Nehalem is not core_2, so it shouldn't be reported as such. However, with the earlier arch perfmon patch it will now fall back to arch perfmon mode, so there is no need to fake it as core_2. The only drawback is that Linus will need to patch the arch perfmon support into his oprofile binary now, but I think he can do that.

Signed-off-by: Andi Kleen <ak...@li...>
---
 arch/x86/oprofile/nmi_int.c |    3 ---
 1 files changed, 0 insertions(+), 3 deletions(-)

diff --git a/arch/x86/oprofile/nmi_int.c b/arch/x86/oprofile/nmi_int.c
index 6438c32..669a713 100644
--- a/arch/x86/oprofile/nmi_int.c
+++ b/arch/x86/oprofile/nmi_int.c
@@ -418,9 +418,6 @@ static int __init ppro_init(char **cpu_type)
 	case 15: case 23:
 		*cpu_type = "i386/core_2";
 		break;
-	case 26:
-		*cpu_type = "i386/core_2";
-		break;
 	default:
 		/* Unknown */
 		return 0;
-- 
1.5.6 |
From: Robert R. <rob...@am...> - 2008-09-25 20:05:46
|
On 20.08.08 18:40:32, Andi Kleen wrote:
> From: Andi Kleen <ak...@li...>
>
> This essentially reverts Linus' earlier 4b9f12a3779c548b68bc9af7d94030868ad3aa1b
> commit. Nehalem is not core_2, so it shouldn't be reported as such.
> However with the earlier arch perfmon patch it will fall back to
> arch perfmon mode now, so there is no need to fake it as core_2.
> The only drawback is that Linus will need to patch the arch perfmon
> support into his oprofile binary now, but I think he can do that.
>
> Signed-off-by: Andi Kleen <ak...@li...>

I will send this patch upstream together with the architectural perfmon implementation, once the userland part is upstream.

-Robert

> [...]
> -- 
> 1.5.6

-- 
Advanced Micro Devices, Inc.
Operating System Research Center
email: rob...@am... |
From: Andi K. <an...@fi...> - 2008-08-20 16:40:34
|
From: Andi Kleen <ak...@li...>

Newer Intel CPUs (Core1+) have support for architectural events
described in CPUID 0xA. See the IA32 SDM Vol3b.18 for details.

The advantage of this is that it can be done without knowing about
the specific CPU, because the CPU describes by itself what
performance events are supported. This is only a fallback
because only a limited set of 6 events is supported.
This allows profiling on Nehalem and on Atom systems
(the latter not tested).

This patch implements support for that in oprofile's Intel
Family 6 profiling module. It also has the advantage of supporting
an arbitrary number of events now, as reported by the CPU.
Also allow arbitrary counter widths >32 bit while we're at it.

Requires a patched oprofile userland to support the new
architecture.

Signed-off-by: Andi Kleen <ak...@li...>
---
 Documentation/kernel-parameters.txt |    5 ++
 arch/x86/oprofile/nmi_int.c         |   32 +++++++++--
 arch/x86/oprofile/op_model_ppro.c   |  104 +++++++++++++++++++++++++++-------
 arch/x86/oprofile/op_x86_model.h    |    3 +
 4 files changed, 116 insertions(+), 28 deletions(-)

diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 056742c..10c8b1b 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -1486,6 +1486,11 @@ and is between 256 and 4096 characters. It is defined in the file
 	oprofile.timer=	[HW]
 			Use timer interrupt instead of performance counters
 
+	oprofile.force_arch_perfmon=1 [X86]
+			Force use of architectural perfmon performance counters
+			in oprofile on Intel CPUs. The kernel selects the
+			correct default on its own.
+
 	osst=		[HW,SCSI] SCSI Tape Driver
 			Format: <buffer_size>,<write_threshold>
 			See also Documentation/scsi/st.txt.

diff --git a/arch/x86/oprofile/nmi_int.c b/arch/x86/oprofile/nmi_int.c
index 36d2f92..6438c32 100644
--- a/arch/x86/oprofile/nmi_int.c
+++ b/arch/x86/oprofile/nmi_int.c
@@ -430,6 +430,19 @@ static int __init ppro_init(char **cpu_type)
 	return 1;
 }
 
+static int force_arch_perfmon;
+module_param(force_arch_perfmon, int, 0);
+
+static int __init arch_perfmon_init(char **cpu_type)
+{
+	if (!cpu_has_arch_perfmon)
+		return 0;
+	*cpu_type = "i386/arch_perfmon";
+	model = &op_arch_perfmon_spec;
+	arch_perfmon_setup_counters();
+	return 1;
+}
+
 /* in order to get sysfs right */
 static int using_nmi;
 
@@ -437,7 +450,7 @@ int __init op_nmi_init(struct oprofile_operations *ops)
 {
 	__u8 vendor = boot_cpu_data.x86_vendor;
 	__u8 family = boot_cpu_data.x86;
-	char *cpu_type;
+	char *cpu_type = NULL;
 
 	if (!cpu_has_apic)
 		return -ENODEV;
@@ -467,22 +480,29 @@ int __init op_nmi_init(struct oprofile_operations *ops)
 		break;
 
 	case X86_VENDOR_INTEL:
+		if (force_arch_perfmon) {
+			if (!arch_perfmon_init(&cpu_type))
+				return -ENODEV;
+			break;
+		}
+
 		switch (family) {
 			/* Pentium IV */
 			case 0xf:
-				if (!p4_init(&cpu_type))
-					return -ENODEV;
+				p4_init(&cpu_type);
 				break;
 
 			/* A P6-class processor */
 			case 6:
-				if (!ppro_init(&cpu_type))
-					return -ENODEV;
+				ppro_init(&cpu_type);
 				break;
 
 			default:
-				return -ENODEV;
+				break;
 		}
+
+		if (!cpu_type && !arch_perfmon_init(&cpu_type))
+			return -ENODEV;
 		break;
 
 	default:

diff --git a/arch/x86/oprofile/op_model_ppro.c b/arch/x86/oprofile/op_model_ppro.c
index eff431f..12e207a 100644
--- a/arch/x86/oprofile/op_model_ppro.c
+++ b/arch/x86/oprofile/op_model_ppro.c
@@ -1,32 +1,34 @@
 /*
  * @file op_model_ppro.h
- * pentium pro / P6 model-specific MSR operations
+ * Family 6 perfmon and architectural perfmon MSR operations
  *
  * @remark Copyright 2002 OProfile authors
+ * @remark Copyright 2008 Intel Corporation
  * @remark Read the file COPYING
  *
  * @author John Levon
  * @author Philippe Elie
  * @author Graydon Hoare
+ * @author Andi Kleen
  */
 
 #include <linux/oprofile.h>
+#include <linux/slab.h>
 #include <asm/ptrace.h>
 #include <asm/msr.h>
 #include <asm/apic.h>
 #include <asm/nmi.h>
+#include <asm/intel_arch_perfmon.h>
 
 #include "op_x86_model.h"
 #include "op_counter.h"
 
-#define NUM_COUNTERS 2
-#define NUM_CONTROLS 2
+static int num_counters = 2;
+static int counter_width = 32;
 
 #define CTR_IS_RESERVED(msrs, c) (msrs->counters[(c)].addr ? 1 : 0)
 #define CTR_READ(l, h, msrs, c) do {rdmsr(msrs->counters[(c)].addr, (l), (h)); } while (0)
-#define CTR_32BIT_WRITE(l, msrs, c)	\
-	do {wrmsr(msrs->counters[(c)].addr, -(u32)(l), 0); } while (0)
-#define CTR_OVERFLOWED(n) (!((n) & (1U<<31)))
+#define CTR_OVERFLOWED(n) (!((n) & (1U<<(counter_width-1))))
 
 #define CTRL_IS_RESERVED(msrs, c) (msrs->controls[(c)].addr ? 1 : 0)
 #define CTRL_READ(l, h, msrs, c) do {rdmsr((msrs->controls[(c)].addr), (l), (h)); } while (0)
@@ -40,20 +42,20 @@
 #define CTRL_SET_UM(val, m) (val |= (m << 8))
 #define CTRL_SET_EVENT(val, e) (val |= e)
 
-static unsigned long reset_value[NUM_COUNTERS];
+static u64 *reset_value;
 
 static void ppro_fill_in_addresses(struct op_msrs * const msrs)
 {
 	int i;
 
-	for (i = 0; i < NUM_COUNTERS; i++) {
+	for (i = 0; i < num_counters; i++) {
 		if (reserve_perfctr_nmi(MSR_P6_PERFCTR0 + i))
 			msrs->counters[i].addr = MSR_P6_PERFCTR0 + i;
 		else
 			msrs->counters[i].addr = 0;
 	}
 
-	for (i = 0; i < NUM_CONTROLS; i++) {
+	for (i = 0; i < num_counters; i++) {
 		if (reserve_evntsel_nmi(MSR_P6_EVNTSEL0 + i))
 			msrs->controls[i].addr = MSR_P6_EVNTSEL0 + i;
 		else
@@ -67,8 +69,22 @@ static void ppro_setup_ctrs(struct op_msrs const * const msrs)
 	unsigned int low, high;
 	int i;
 
+	if (!reset_value) {
+		reset_value = kmalloc(sizeof(unsigned) * num_counters,
+					GFP_ATOMIC);
+		if (!reset_value)
+			return;
+	}
+
+	if (cpu_has_arch_perfmon) {
+		union cpuid10_eax eax;
+		eax.full = cpuid_eax(0xa);
+		if (counter_width < eax.split.bit_width)
+			counter_width = eax.split.bit_width;
+	}
+
 	/* clear all counters */
-	for (i = 0 ; i < NUM_CONTROLS; ++i) {
+	for (i = 0 ; i < num_counters; ++i) {
 		if (unlikely(!CTRL_IS_RESERVED(msrs, i)))
 			continue;
 		CTRL_READ(low, high, msrs, i);
@@ -77,18 +93,18 @@ static void ppro_setup_ctrs(struct op_msrs const * const msrs)
 	}
 
 	/* avoid a false detection of ctr overflows in NMI handler */
-	for (i = 0; i < NUM_COUNTERS; ++i) {
+	for (i = 0; i < num_counters; ++i) {
 		if (unlikely(!CTR_IS_RESERVED(msrs, i)))
 			continue;
-		CTR_32BIT_WRITE(1, msrs, i);
+		wrmsrl(msrs->counters[i].addr, -1LL);
 	}
 
 	/* enable active counters */
-	for (i = 0; i < NUM_COUNTERS; ++i) {
+	for (i = 0; i < num_counters; ++i) {
 		if ((counter_config[i].enabled) && (CTR_IS_RESERVED(msrs, i))) {
 			reset_value[i] = counter_config[i].count;
 
-			CTR_32BIT_WRITE(counter_config[i].count, msrs, i);
+			wrmsrl(msrs->counters[i].addr, -reset_value[i]);
 
 			CTRL_READ(low, high, msrs, i);
 			CTRL_CLEAR(low);
@@ -111,13 +127,13 @@ static int ppro_check_ctrs(struct pt_regs * const regs,
 	unsigned int low, high;
 	int i;
 
-	for (i = 0 ; i < NUM_COUNTERS; ++i) {
+	for (i = 0 ; i < num_counters; ++i) {
 		if (!reset_value[i])
 			continue;
 		CTR_READ(low, high, msrs, i);
 		if (CTR_OVERFLOWED(low)) {
 			oprofile_add_sample(regs, i);
-			CTR_32BIT_WRITE(reset_value[i], msrs, i);
+			wrmsrl(msrs->counters[i].addr, -reset_value[i]);
 		}
 	}
 
@@ -141,7 +157,7 @@ static void ppro_start(struct op_msrs const * const msrs)
 	unsigned int low, high;
 	int i;
 
-	for (i = 0; i < NUM_COUNTERS; ++i) {
+	for (i = 0; i < num_counters; ++i) {
 		if (reset_value[i]) {
 			CTRL_READ(low, high, msrs, i);
 			CTRL_SET_ACTIVE(low);
@@ -156,7 +172,7 @@ static void ppro_stop(struct op_msrs const * const msrs)
 	unsigned int low, high;
 	int i;
 
-	for (i = 0; i < NUM_COUNTERS; ++i) {
+	for (i = 0; i < num_counters; ++i) {
 		if (!reset_value[i])
 			continue;
 		CTRL_READ(low, high, msrs, i);
@@ -169,21 +185,65 @@ static void ppro_shutdown(struct op_msrs const * const msrs)
 {
 	int i;
 
-	for (i = 0 ; i < NUM_COUNTERS ; ++i) {
+	for (i = 0 ; i < num_counters ; ++i) {
 		if (CTR_IS_RESERVED(msrs, i))
 			release_perfctr_nmi(MSR_P6_PERFCTR0 + i);
 	}
-	for (i = 0 ; i < NUM_CONTROLS ; ++i) {
+	for (i = 0 ; i < num_counters ; ++i) {
 		if (CTRL_IS_RESERVED(msrs, i))
 			release_evntsel_nmi(MSR_P6_EVNTSEL0 + i);
 	}
+	if (reset_value) {
+		kfree(reset_value);
+		reset_value = NULL;
+	}
 }
 
 
 struct op_x86_model_spec const op_ppro_spec = {
-	.num_counters = NUM_COUNTERS,
-	.num_controls = NUM_CONTROLS,
+	.num_counters = 2,
+	.num_controls = 2,
+	.fill_in_addresses = &ppro_fill_in_addresses,
+	.setup_ctrs = &ppro_setup_ctrs,
+	.check_ctrs = &ppro_check_ctrs,
+	.start = &ppro_start,
+	.stop = &ppro_stop,
+	.shutdown = &ppro_shutdown
+};
+
+/*
+ * Architectural performance monitoring.
+ *
+ * Newer Intel CPUs (Core1+) have support for architectural
+ * events described in CPUID 0xA. See the IA32 SDM Vol3b.18 for details.
+ * The advantage of this is that it can be done without knowing about
+ * the specific CPU.
+ */
+
+void arch_perfmon_setup_counters(void)
+{
+	union cpuid10_eax eax;
+
+	eax.full = cpuid_eax(0xa);
+
+	/* Workaround for BIOS bugs in 6/15. Taken from perfmon2 */
+	if (eax.split.version_id == 0 && current_cpu_data.x86 == 6 &&
+		current_cpu_data.x86_model == 15) {
+		eax.split.version_id = 2;
+		eax.split.num_counters = 2;
+		eax.split.bit_width = 40;
+	}
+
+	num_counters = eax.split.num_counters;
+
+	op_arch_perfmon_spec.num_counters = num_counters;
+	op_arch_perfmon_spec.num_controls = num_counters;
+}
+
+struct op_x86_model_spec op_arch_perfmon_spec = {
+	/* num_counters/num_controls filled in at runtime */
 	.fill_in_addresses = &ppro_fill_in_addresses,
+	/* user space does the cpuid check for available events */
 	.setup_ctrs = &ppro_setup_ctrs,
 	.check_ctrs = &ppro_check_ctrs,
 	.start = &ppro_start,

diff --git a/arch/x86/oprofile/op_x86_model.h b/arch/x86/oprofile/op_x86_model.h
index 575e08e..68c2bb9 100644
--- a/arch/x86/oprofile/op_x86_model.h
+++ b/arch/x86/oprofile/op_x86_model.h
@@ -47,5 +47,8 @@ extern struct op_x86_model_spec const op_ppro_spec;
 extern struct op_x86_model_spec const op_p4_spec;
 extern struct op_x86_model_spec const op_p4_ht2_spec;
 extern struct op_x86_model_spec const op_athlon_spec;
+extern struct op_x86_model_spec op_arch_perfmon_spec;
+
+extern void arch_perfmon_setup_counters(void);
 
 #endif /* OP_X86_MODEL_H */
-- 
1.5.6 |
From: Robert R. <rob...@am...> - 2008-09-25 19:32:58
|
Andi,

I have uploaded all pending OProfile patches to my kernel.org repository. As we already talked about, there are changes in it that implement model-specific init/exit functions. Please change your patch so that it uses these functions. This will make your implementation cleaner. I will also send some more comments on the patches themselves.

It would help me if you could send the new patches relative to my tree.

Thanks a lot,

-Robert

On 20.08.08 18:40:29, Andi Kleen wrote:
>
> This patchkit implements architectural perfmon support in oprofile.
> This allows to do generic profiling of a few standard events in all
> newer Intel CPUs, including Atom and Nehalem. The CPU describes
> its event in CPUID so they can be used without knowing anything
> about the CPU.
>
> The code requires some changes to the oprofile userland, which
> I am posting separately to the oprofile list.
>
> -Andi

-- 
Advanced Micro Devices, Inc.
Operating System Research Center
email: rob...@am... |
From: Robert R. <rob...@am...> - 2008-09-25 20:02:15
|
On 20.08.08 18:40:31, Andi Kleen wrote:
> From: Andi Kleen <ak...@li...>
>
> Newer Intel CPUs (Core1+) have support for architectural
> events described in CPUID 0xA. See the IA32 SDM Vol3b.18 for details.
>
> The advantage of this is that it can be done without knowing about
> the specific CPU, because the CPU describes by itself what
> performance events are supported. This is only a fallback
> because only a limited set of 6 events are supported.
> This allows to do profiling on Nehalem and on Atom systems
> (later not tested)
>
> [...]
>
> +	oprofile.force_arch_perfmon=1 [X86]
> +			Force use of architectural perfmon performance counters
> +			in oprofile on Intel CPUs. The kernel selects the
> +			correct default on its own.
> +

Could you create a separate patch that introduces this new kernel parameter? This would make it easier to send all other changes upstream. We already discussed the need for this parameter.

Maybe it would fit better to have a more generalized parameter for this that could then be reused by other archs/models as well. Something like force_pmu_detection, which could be used for all new CPUs (also other models) that do not yet have a specific kernel implementation.

Even better would be a sysfs entry instead, with which we can specify the cpu type to use:

  echo "i386/arch_perfmon" > /sys/module/oprofile/parameters/cpu_type

That would allow us to switch the pmu at runtime and also from userland.

> [...]
>
> +extern struct op_x86_model_spec op_arch_perfmon_spec;
> +
> +extern void arch_perfmon_setup_counters(void);

Put this in an init function of op_x86_model_spec. Then it could also be static.

-Robert

>
> #endif /* OP_X86_MODEL_H */
> -- 
> 1.5.6

-- 
Advanced Micro Devices, Inc.
Operating System Research Center
email: rob...@am... |
From: Andi K. <ak...@li...> - 2008-09-26 01:13:12
|
Robert Richter wrote:
> On 20.08.08 18:40:31, Andi Kleen wrote:
>> [...]
>>
>> +	oprofile.force_arch_perfmon=1 [X86]
>> +			Force use of architectural perfmon performance counters
>> +			in oprofile on Intel CPUs. The kernel selects the
>> +			correct default on its own.
>> +
>
> Could you create a separate patch that introduces this new kernel
> parameter?

The parameter only makes sense together with something that uses it. So an additional one-liner patch (+ docs) would be a patch depending on the earlier arch perfmon patch. If you really want that I can do it, but frankly it doesn't make sense to me.

It's only really a debugging feature; I can also just take it out if it's a problem.

> This would make it easier to send all other changes
> upstream. We already discussed the need of this parameter.

I thought the result of the discussion was that it was not useful because there's no equivalent of arch perfmon on any other x86 CPUs? IBS is still not architectural, but family/model specific.

> Maybe it would fit better to have a more generalized parameter for this
> that could then be reused by other archs/models as well. Something like
> force_pmu_detection that could be used for all new CPUs (also other
> models) that do not yet have a specific kernel implementation.

You mean something like pmu=<oprofile arch string> to force use of that?

> Even better would be a sysfs entry instead, with which we can specify
> the cpu type to use:

The module param is already in sysfs.

> echo "i386/arch_perfmon" > /sys/module/oprofile/parameters/cpu_type
>
> That would allow us to switch the pmu at runtime and also from
> userland.

Switching at runtime would require complicated changes, I think. Also |
From: Robert R. <rob...@am...> - 2008-09-26 03:24:00
|
On 25.09.08 17:44:39, Andi Kleen wrote:
> Robert Richter wrote:
>> On 20.08.08 18:40:31, Andi Kleen wrote:
>>> [...]
>>> +	oprofile.force_arch_perfmon=1 [X86]
>>> +			Force use of architectural perfmon performance counters
>>> +			in oprofile on Intel CPUs. The kernel selects the
>>> +			correct default on its own.
>>> +
>>
>> Could you create a separate patch that introduces this new kernel
>> parameter?
>
> The parameter only makes sense together with something which uses it.
> So an additional one-liner patch (+ docs) would be a patch depending on
> the earlier arch perfmon patch. If you really want that I can do it, but
> frankly it doesn't make sense to me.
>
> It's only really a debugging feature; I can also just take it out
> if it's a problem.
>
>> This would make it easier to send all other changes
>> upstream. We already discussed the need of this parameter.
>
> I thought the result of the discussion was that it was not useful
> because there's no equivalent of arch perfmon on any other x86 CPUs?
> IBS is still not architectural, but family/model specific.
>
>> Maybe it
>> would fit better to have a more generalized parameter for this that
>> could be reused then by other archs/models as well. Something like
>> force_pmu_detection that could be used for all new CPUs (also other
>> models) that do not yet have a specific kernel implementation.
>
> You mean something like pmu=<oprofile arch string> to force
> use of that?

I think this would be the best solution, providing a parameter

  oprofile.force_pmu=<oprofile arch string>

This can easily be implemented and also reused by others. I would be
fine with this solution. No separate patch needed then.

>
>> Even better would be a sysfs entry instead, with which we can specify
>> which cpu type to use:
>
> module param is already in sysfs.
>
>> echo "i386/arch_perfmon" > /sys/module/oprofile/parameters/cpu_type
>>
>> That would allow us to switch the pmu at runtime and also from the
>> userland.
>
> Switching at runtime would be complicated changes I think

Right, this is overhead nobody will use.

-Robert

--
Advanced Micro Devices, Inc.
Operating System Research Center

email: rob...@am...
|
From: Andi K. <ak...@li...> - 2008-09-26 23:37:29
|
Robert Richter wrote:
> I think this would be the best solution, providing a parameter
>
>   oprofile.force_pmu=<oprofile arch string>
>
> This can easily be implemented and also reused by others. I would be
> fine with this solution. No separate patch needed then.

Ok, I can implement that, but it'll be a separate patch. It might be
next week before I can work on it though.

-Andi
|
From: Robert R. <rob...@am...> - 2008-09-28 07:34:33
|
On 26.09.08 16:09:02, Andi Kleen wrote:
> Robert Richter wrote:
>
>> I think this would be the best solution, providing a parameter
>> oprofile.force_pmu=<oprofile arch string>
>> This can easily be implemented and also reused by others. I would be
>> fine with this solution. No separate patch needed then.
>
> Ok, I can implement that, but it'll be a separate patch. It might be
> next week before I can work on it though.

Thanks Andi,

-Robert

>
> -Andi
>

--
Advanced Micro Devices, Inc.
Operating System Research Center

email: rob...@am...
|