* [PATCH 0/3] Support auto counter reload
@ 2024-09-30 15:41 kan.liang
2024-09-30 15:41 ` [PATCH 1/3] perf/x86/intel: Fix ARCH_PERFMON_NUM_COUNTER_LEAF kan.liang
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: kan.liang @ 2024-09-30 15:41 UTC (permalink / raw)
To: peterz, mingo, acme, namhyung, irogers, adrian.hunter,
linux-kernel
Cc: eranian, ak, thomas.falcon, Kan Liang
From: Kan Liang <kan.liang@linux.intel.com>
The relative rates among two or more events are useful for performance
analysis, e.g., a high branch miss rate may indicate a performance
issue. Usually, the samples whose relative rate exceeds some threshold
are the most useful ones. However, traditional sampling samples each
event separately. To derive the relative rates among two or more
events, a high sample rate is required, which brings high overhead,
and many of the samples taken outside the hotspots are dropped as
useless during post-processing.
Auto Counter Reload (ACR) provides a means for software to specify that,
for each supported counter, the hardware should automatically reload the
counter to a specified initial value upon overflow of chosen counters.
This mechanism enables software to sample based on the relative rate of
two (or more) events, such that a sample (PMI or PEBS) is taken only if
the rate of one event exceeds some threshold relative to the rate of
another event. Taking a PMI or PEBS only when the relative rate of
perfmon events crosses a threshold can have significantly less
performance overhead than other techniques.
The details can be found in the Intel Architecture Instruction Set
Extensions and Future Features manual (revision 053), Section 8.7,
"AUTO COUNTER RELOAD".
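The reload behavior can be illustrated with a simplified software model
(the class and function names below are purely illustrative, not the real
MSR interface): each counter carries a reload period and a mask of "cause"
counters, and whenever a cause counter overflows, the counter is reloaded
instead of continuing toward its own overflow. A sample is taken only when
the sampling event's counter overflows first.

```python
# A simplified software model of Auto Counter Reload (ACR) semantics.
# Names and structure are illustrative; this is not the real MSR interface.

class AcrCounter:
    def __init__(self, period, cause_mask):
        self.period = period          # counts until overflow (the reload value)
        self.cause_mask = cause_mask  # bit i set => counter i's overflow reloads this one
        self.count = 0.0

def count_samples(miss_rate, steps=100):
    """Return how often the branch-misses counter (index 0) overflows,
    i.e. how many samples would be taken."""
    # Counter 0: branch-misses, period 2, reloaded by counter 1 (mask 0x2).
    # Counter 1: branches, period 10, reloaded by either counter (mask 0x3).
    counters = [AcrCounter(2, 0x2), AcrCounter(10, 0x3)]
    increments = [miss_rate, 1.0]
    samples = 0
    for _ in range(steps):
        for i, c in enumerate(counters):
            c.count += increments[i]
        overflowed = [i for i, c in enumerate(counters) if c.count >= c.period]
        for i in overflowed:
            counters[i].count = 0.0   # normal sampling reload of the overflowing counter
            for c in counters:
                if c.cause_mask & (1 << i):
                    c.count = 0.0     # ACR reload triggered by counter i's overflow
        samples += overflowed.count(0)
    return samples

# With a 30% miss rate (above the 2/10 = 20% threshold), branch-misses
# overflows first and samples are taken; with a 10% miss rate, branches
# always overflows first and reloads counter 0, so no sample is ever taken.
assert count_samples(0.3) > 0
assert count_samples(0.1) == 0
```

This matches the usage later in the cover letter, where the threshold is
the ratio of the two sampling periods.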
Examples:
Here is a snippet of mispredict.c. Since the array holds random
numbers, the jumps are random and often mispredicted.
The misprediction rate depends on the compared value.
For Loop 1, ~11% of all branches are mispredicted.
For Loop 2, ~21% of all branches are mispredicted.
main()
{
	...
	for (i = 0; i < N; i++)
		data[i] = rand() % 256;
	...

	/* Loop 1 */
	for (k = 0; k < 50; k++)
		for (i = 0; i < N; i++)
			if (data[i] >= 64)
				sum += data[i];
	...
	...

	/* Loop 2 */
	for (k = 0; k < 50; k++)
		for (i = 0; i < N; i++)
			if (data[i] >= 128)
				sum += data[i];
	...
}
Usually, code with a high branch miss rate performs badly. To
understand the branch miss rate of the code, the traditional method
is to sample both the branch-instructions and branch-misses events.
E.g.,
perf record -e "{cpu_atom/branch-misses/ppu, cpu_atom/branch-instructions/u}"
-c 1000000 -- ./mispredict
[ perf record: Woken up 4 times to write data ]
[ perf record: Captured and wrote 0.925 MB perf.data (5106 samples) ]
The 5106 samples are from both events and are spread across both loops.
In the post-processing stage, the user can find that Loop 2 has a ~21%
branch miss rate, and can then focus on the branch-misses samples for
Loop 2.
With this patch, the user can generate samples only when the branch
miss rate exceeds 20%.
perf record -e "{cpu_atom/branch-misses,period=200000,acr_mask=0x2/ppu,
cpu_atom/branch-instructions,period=1000000,acr_mask=0x3/u}"
-- ./mispredict
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.098 MB perf.data (2498 samples) ]
$ perf report
Percent │154: movl $0x0,-0x14(%rbp)
│ ↓ jmp 1af
│ for (i = j; i < N; i++)
│15d: mov -0x10(%rbp),%eax
│ mov %eax,-0x18(%rbp)
│ ↓ jmp 1a2
│ if (data[i] >= 128)
│165: mov -0x18(%rbp),%eax
│ cltq
│ lea 0x0(,%rax,4),%rdx
│ mov -0x8(%rbp),%rax
│ add %rdx,%rax
│ mov (%rax),%eax
│ ┌──cmp $0x7f,%eax
100.00 0.00 │ ├──jle 19e
│ │sum += data[i];
The 2498 samples are all from the branch-misses event in Loop 2.
The number of samples and the overhead are significantly reduced
without losing any information.
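The acr_mask values in the command above can be read as follows (a small
helper sketch; the mask semantics are as described earlier in this cover
letter, while the helper function itself is illustrative):

```python
# Decode an acr_mask value: bit i set means "an overflow of the i-th event
# in the group (in enable order) reloads this event's counter".

def decode_acr_mask(mask, group):
    return [group[i] for i in range(len(group)) if mask & (1 << i)]

group = ["branch-misses", "branch-instructions"]

# branch-misses, acr_mask=0x2: reloaded only when branch-instructions
# overflows, so it keeps counting toward a sample while the miss rate is high.
assert decode_acr_mask(0x2, group) == ["branch-instructions"]

# branch-instructions, acr_mask=0x3: reloaded when either event overflows.
assert decode_acr_mask(0x3, group) == ["branch-misses", "branch-instructions"]

# The effective threshold is the ratio of the two periods: 200000/1000000 = 20%.
assert 200000 / 1000000 == 0.2
```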
Kan Liang (3):
perf/x86/intel: Fix ARCH_PERFMON_NUM_COUNTER_LEAF
perf/x86/intel: Add the enumeration and flag for the auto counter
reload
perf/x86/intel: Support auto counter reload
arch/x86/events/intel/core.c | 252 ++++++++++++++++++++++++++++-
arch/x86/events/perf_event.h | 21 +++
arch/x86/events/perf_event_flags.h | 2 +-
arch/x86/include/asm/msr-index.h | 4 +
arch/x86/include/asm/perf_event.h | 4 +-
include/linux/perf_event.h | 2 +
6 files changed, 278 insertions(+), 7 deletions(-)
--
2.38.1
^ permalink raw reply [flat|nested] 6+ messages in thread* [PATCH 1/3] perf/x86/intel: Fix ARCH_PERFMON_NUM_COUNTER_LEAF 2024-09-30 15:41 [PATCH 0/3] Support auto counter reload kan.liang @ 2024-09-30 15:41 ` kan.liang 2024-09-30 15:41 ` [PATCH 2/3] perf/x86/intel: Add the enumeration and flag for the auto counter reload kan.liang 2024-09-30 15:41 ` [PATCH 3/3] perf/x86/intel: Support " kan.liang 2 siblings, 0 replies; 6+ messages in thread From: kan.liang @ 2024-09-30 15:41 UTC (permalink / raw) To: peterz, mingo, acme, namhyung, irogers, adrian.hunter, linux-kernel Cc: eranian, ak, thomas.falcon, Kan Liang, stable From: Kan Liang <kan.liang@linux.intel.com> The EAX of the CPUID Leaf 023H enumerates the mask of valid sub-leaves. To tell the availability of the sub-leaf 1 (enumerate the counter mask), perf should check the bit 1 (0x2) of EAS, rather than bit 0 (0x1). The error is not user-visible on bare metal. Because the sub-leaf 0 and the sub-leaf 1 are always available. However, it may bring issues in a virtualization environment when a VMM only enumerates the sub-leaf 0. 
Fixes: eb467aaac21e ("perf/x86/intel: Support Architectural PerfMon Extension leaf") Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Cc: stable@vger.kernel.org --- arch/x86/events/intel/core.c | 4 ++-- arch/x86/include/asm/perf_event.h | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c index 342f8b1a2f93..123ed1d60118 100644 --- a/arch/x86/events/intel/core.c +++ b/arch/x86/events/intel/core.c @@ -4900,8 +4900,8 @@ static void update_pmu_cap(struct x86_hybrid_pmu *pmu) if (ebx & ARCH_PERFMON_EXT_EQ) pmu->config_mask |= ARCH_PERFMON_EVENTSEL_EQ; - if (sub_bitmaps & ARCH_PERFMON_NUM_COUNTER_LEAF_BIT) { - cpuid_count(ARCH_PERFMON_EXT_LEAF, ARCH_PERFMON_NUM_COUNTER_LEAF, + if (sub_bitmaps & ARCH_PERFMON_NUM_COUNTER_LEAF) { + cpuid_count(ARCH_PERFMON_EXT_LEAF, ARCH_PERFMON_NUM_COUNTER_LEAF_BIT, &eax, &ebx, &ecx, &edx); pmu->cntr_mask64 = eax; pmu->fixed_cntr_mask64 = ebx; diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h index e3b5e8e96fb3..1d4ce655aece 100644 --- a/arch/x86/include/asm/perf_event.h +++ b/arch/x86/include/asm/perf_event.h @@ -191,7 +191,7 @@ union cpuid10_edx { #define ARCH_PERFMON_EXT_UMASK2 0x1 #define ARCH_PERFMON_EXT_EQ 0x2 #define ARCH_PERFMON_NUM_COUNTER_LEAF_BIT 0x1 -#define ARCH_PERFMON_NUM_COUNTER_LEAF 0x1 +#define ARCH_PERFMON_NUM_COUNTER_LEAF BIT(ARCH_PERFMON_NUM_COUNTER_LEAF_BIT) /* * Intel Architectural LBR CPUID detection/enumeration details: -- 2.38.1 ^ permalink raw reply related [flat|nested] 6+ messages in thread
* [PATCH 2/3] perf/x86/intel: Add the enumeration and flag for the auto counter reload 2024-09-30 15:41 [PATCH 0/3] Support auto counter reload kan.liang 2024-09-30 15:41 ` [PATCH 1/3] perf/x86/intel: Fix ARCH_PERFMON_NUM_COUNTER_LEAF kan.liang @ 2024-09-30 15:41 ` kan.liang 2024-09-30 15:41 ` [PATCH 3/3] perf/x86/intel: Support " kan.liang 2 siblings, 0 replies; 6+ messages in thread From: kan.liang @ 2024-09-30 15:41 UTC (permalink / raw) To: peterz, mingo, acme, namhyung, irogers, adrian.hunter, linux-kernel Cc: eranian, ak, thomas.falcon, Kan Liang From: Kan Liang <kan.liang@linux.intel.com> The counters that support the auto counter reload feature can be enumerated in the CPUID Leaf 0x23 sub-leaf 0x2. Add acr_cntr_mask to store the mask of counters which are reloadable. Add acr_cntr_cause_mask to store the mask of counters which can cause reload. Since the e-core and p-core may have different numbers of counters, track the masks in the struct x86_hybrid_pmu as well. The Auto Counter Reload feature requires a dynamic constraint. Add a PMU flag to allocate the constraint_list. There are many existing features which require a dynamic constraint as well. Add a PMU_FL_DYN_MASK to include the flags of all the features. 
Signed-off-by: Kan Liang <kan.liang@linux.intel.com> --- arch/x86/events/intel/core.c | 17 +++++++++++++++-- arch/x86/events/perf_event.h | 12 ++++++++++++ arch/x86/include/asm/perf_event.h | 2 ++ 3 files changed, 29 insertions(+), 2 deletions(-) diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c index 123ed1d60118..57e4dc0a7b23 100644 --- a/arch/x86/events/intel/core.c +++ b/arch/x86/events/intel/core.c @@ -4789,7 +4789,8 @@ static struct intel_excl_cntrs *allocate_excl_cntrs(int cpu) return c; } - +#define PMU_FL_DYN_MASK (PMU_FL_EXCL_CNTRS | PMU_FL_TFA | \ + PMU_FL_BR_CNTR | PMU_FL_ACR) int intel_cpuc_prepare(struct cpu_hw_events *cpuc, int cpu) { cpuc->pebs_record_size = x86_pmu.pebs_record_size; @@ -4800,7 +4801,7 @@ int intel_cpuc_prepare(struct cpu_hw_events *cpuc, int cpu) goto err; } - if (x86_pmu.flags & (PMU_FL_EXCL_CNTRS | PMU_FL_TFA | PMU_FL_BR_CNTR)) { + if (x86_pmu.flags & PMU_FL_DYN_MASK) { size_t sz = X86_PMC_IDX_MAX * sizeof(struct event_constraint); cpuc->constraint_list = kzalloc_node(sz, GFP_KERNEL, cpu_to_node(cpu)); @@ -4907,6 +4908,18 @@ static void update_pmu_cap(struct x86_hybrid_pmu *pmu) pmu->fixed_cntr_mask64 = ebx; } + if (sub_bitmaps & ARCH_PERFMON_ACR_LEAF) { + cpuid_count(ARCH_PERFMON_EXT_LEAF, ARCH_PERFMON_ACR_LEAF_BIT, + &eax, &ebx, &ecx, &edx); + /* The mask of the counters which can be reloaded */ + pmu->acr_cntr_mask64 = eax | ((u64)ebx << INTEL_PMC_IDX_FIXED); + + /* The mask of the counters which can cause a reload of reloadable counters */ + pmu->acr_cntr_cause_mask = ecx | ((u64)edx << INTEL_PMC_IDX_FIXED); + + x86_pmu.flags |= PMU_FL_ACR; + } + if (!intel_pmu_broken_perf_cap()) { /* Perf Metric (Bit 15) and PEBS via PT (Bit 16) are hybrid enumeration */ rdmsrl(MSR_IA32_PERF_CAPABILITIES, pmu->intel_cap.capabilities); diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h index 31c2771545a6..c7bde9cee699 100644 --- a/arch/x86/events/perf_event.h +++ b/arch/x86/events/perf_event.h @@ 
-719,6 +719,12 @@ struct x86_hybrid_pmu { u64 fixed_cntr_mask64; unsigned long fixed_cntr_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)]; }; + + union { + u64 acr_cntr_mask64; + unsigned long acr_cntr_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)]; + }; + u64 acr_cntr_cause_mask; struct event_constraint unconstrained; u64 hw_cache_event_ids @@ -816,6 +822,11 @@ struct x86_pmu { u64 fixed_cntr_mask64; unsigned long fixed_cntr_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)]; }; + union { + u64 acr_cntr_mask64; + unsigned long acr_cntr_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)]; + }; + u64 acr_cntr_cause_mask; int cntval_bits; u64 cntval_mask; union { @@ -1060,6 +1071,7 @@ do { \ #define PMU_FL_MEM_LOADS_AUX 0x100 /* Require an auxiliary event for the complete memory info */ #define PMU_FL_RETIRE_LATENCY 0x200 /* Support Retire Latency in PEBS */ #define PMU_FL_BR_CNTR 0x400 /* Support branch counter logging */ +#define PMU_FL_ACR 0x800 /* Support auto-counter reload */ #define EVENT_VAR(_id) event_attr_##_id #define EVENT_PTR(_id) &event_attr_##_id.attr.attr diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h index 1d4ce655aece..814a3ca69452 100644 --- a/arch/x86/include/asm/perf_event.h +++ b/arch/x86/include/asm/perf_event.h @@ -192,6 +192,8 @@ union cpuid10_edx { #define ARCH_PERFMON_EXT_EQ 0x2 #define ARCH_PERFMON_NUM_COUNTER_LEAF_BIT 0x1 #define ARCH_PERFMON_NUM_COUNTER_LEAF BIT(ARCH_PERFMON_NUM_COUNTER_LEAF_BIT) +#define ARCH_PERFMON_ACR_LEAF_BIT 0x2 +#define ARCH_PERFMON_ACR_LEAF BIT(ARCH_PERFMON_ACR_LEAF_BIT) /* * Intel Architectural LBR CPUID detection/enumeration details: -- 2.38.1 ^ permalink raw reply related [flat|nested] 6+ messages in thread
* [PATCH 3/3] perf/x86/intel: Support auto counter reload 2024-09-30 15:41 [PATCH 0/3] Support auto counter reload kan.liang 2024-09-30 15:41 ` [PATCH 1/3] perf/x86/intel: Fix ARCH_PERFMON_NUM_COUNTER_LEAF kan.liang 2024-09-30 15:41 ` [PATCH 2/3] perf/x86/intel: Add the enumeration and flag for the auto counter reload kan.liang @ 2024-09-30 15:41 ` kan.liang 2024-10-01 11:16 ` Andi Kleen 2 siblings, 1 reply; 6+ messages in thread From: kan.liang @ 2024-09-30 15:41 UTC (permalink / raw) To: peterz, mingo, acme, namhyung, irogers, adrian.hunter, linux-kernel Cc: eranian, ak, thomas.falcon, Kan Liang From: Kan Liang <kan.liang@linux.intel.com> The relative rates among two or more events are useful for performance analysis, e.g., a high branch miss rate may indicate a performance issue. Usually, the samples with a relative rate that exceeds some threshold are more useful. However, the traditional sampling takes samples of events separately. To get the relative rates among two or more events, a high sample rate is required, which can bring high overhead. Many samples taken in the non-hotspot area are also dropped (useless) in the post-process. The auto counter reload (ACR) feature takes samples when the relative rate of two or more events exceeds some threshold, which provides the fine-grained information at a low cost. To support the feature, two sets of MSRs are introduced. For a given counter IA32_PMC_GPn_CTR/IA32_PMC_FXm_CTR, bit fields in the IA32_PMC_GPn_CFG_B/IA32_PMC_FXm_CFG_B MSR indicate which counter(s) can cause a reload of that counter. The reload value is stored in the IA32_PMC_GPn_CFG_C/IA32_PMC_FXm_CFG_C. The details can be found at Intel Architecture Instruction Set Extensions and Future Features (053) 8.7 AUTO COUNTER RELOAD. The ACR event/group needs to be specially configured. Because the counter mask of an event has to be recalculated according to both the original mask and the reloadable counter mask. 
The new counter mask is stored into the new field, dyn_mask, in struct hw_perf_event. Also, add a new flag PERF_X86_EVENT_ACR to indicate an ACR group, which is set to the group leader. The ACR configuration MSRs are only updated in the enable_event(). The disable_event() doesn't clear the ACR CFG register. Add acr_cfg_b/acr_cfg_c in the struct cpu_hw_events to cache the MSR values. It can avoid a MSR write if the value is not changed. Expose a acr_mask to the sysfs. The perf tool can utilize the new format to configure the relation of events in the group. The bit sequence of the acr_mask follows the events enabled order of the group. The kernel will convert it to the real counter order and save the updated order into the newly added hw.config1 every time the group is scheduled. The hw.config1 is eventually to be written to the ACR config MSR (MSR_IA32_PMC_GP/FX_CFG_B) when the event is enabled. Example: Here is the snippet of the mispredict.c. Since the array has random numbers, jumps are random and often mispredicted. The mispredicted rate depends on the compared value. For the Loop1, ~11% of all branches are mispredicted. For the Loop2, ~21% of all branches are mispredicted. main() { ... for (i = 0; i < N; i++) data[i] = rand() % 256; ... /* Loop 1 */ for (k = 0; k < 50; k++) for (i = 0; i < N; i++) if (data[i] >= 64) sum += data[i]; ... ... /* Loop 2 */ for (k = 0; k < 50; k++) for (i = 0; i < N; i++) if (data[i] >= 128) sum += data[i]; ... } Usually, a code with a high branch miss rate means a bad performance. To understand the branch miss rate of the codes, the traditional method usually sample both branches and branch-misses events. E.g., perf record -e "{cpu_atom/branch-misses/ppu, cpu_atom/branch-instructions/u}" -c 1000000 -- ./mispredict [ perf record: Woken up 4 times to write data ] [ perf record: Captured and wrote 0.925 MB perf.data (5106 samples) ] The 5106 samples are from both events and spread in both Loops. 
In the post process stage, a user can know that the Loop 2 has a 21% branch miss rate. Then they can focus on the samples of branch-misses events for the Loop 2. With this patch, the user can generate the samples only when the branch miss rate > 20%. For example, perf record -e "{cpu_atom/branch-misses,period=200000,acr_mask=0x2/ppu, cpu_atom/branch-instructions,period=1000000,acr_mask=0x3/u}" -- ./mispredict (Two different periods are applied to branch-misses and branch-instructions. The ratio is 20%. If the branch-instructions is overflowed first, the branch miss rate < 20%. No samples should be generated. All counters should be automatically reloaded. The acr_mask is set to 0x3. If the branch-misses is overflowed first, the branch miss rate > 20%. A sample triggered by the branch-misses event should be generated. The counter of the branch-instructions should be automatically reloaded. The acr_mask is set to 0x2, since the branch-misses event is the first event in the group.) [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.098 MB perf.data (2498 samples) ] $perf report Percent │154: movl $0x0,-0x14(%rbp) │ ↓ jmp 1af │ for (i = j; i < N; i++) │15d: mov -0x10(%rbp),%eax │ mov %eax,-0x18(%rbp) │ ↓ jmp 1a2 │ if (data[i] >= 128) │165: mov -0x18(%rbp),%eax │ cltq │ lea 0x0(,%rax,4),%rdx │ mov -0x8(%rbp),%rax │ add %rdx,%rax │ mov (%rax),%eax │ ┌──cmp $0x7f,%eax 100.00 0.00 │ ├──jle 19e │ │sum += data[i]; The 2498 samples are all from the branch-misses events for the Loop 2. The number of samples and overhead is significantly reduced without losing any information. 
Signed-off-by: Kan Liang <kan.liang@linux.intel.com> --- arch/x86/events/intel/core.c | 231 ++++++++++++++++++++++++++++- arch/x86/events/perf_event.h | 9 ++ arch/x86/events/perf_event_flags.h | 2 +- arch/x86/include/asm/msr-index.h | 4 + include/linux/perf_event.h | 2 + 5 files changed, 246 insertions(+), 2 deletions(-) diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c index 57e4dc0a7b23..dde50da46441 100644 --- a/arch/x86/events/intel/core.c +++ b/arch/x86/events/intel/core.c @@ -2854,6 +2854,54 @@ static void intel_pmu_enable_fixed(struct perf_event *event) cpuc->fixed_ctrl_val |= bits; } +static void intel_pmu_config_acr(int idx, u64 mask, u32 reload) +{ + struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events); + int msr_b, msr_c; + + if (!mask && cpuc->acr_cfg_b[idx] == mask) + return; + + if (idx < INTEL_PMC_IDX_FIXED) { + msr_b = MSR_IA32_PMC_V6_GP0_CFG_B; + msr_c = MSR_IA32_PMC_V6_GP0_CFG_C; + } else { + msr_b = MSR_IA32_PMC_V6_FX0_CFG_B; + msr_c = MSR_IA32_PMC_V6_FX0_CFG_C; + idx -= INTEL_PMC_IDX_FIXED; + } + + if (cpuc->acr_cfg_b[idx] != mask) { + wrmsrl(msr_b + x86_pmu.addr_offset(idx, false), mask); + cpuc->acr_cfg_b[idx] = mask; + } + /* Only need to update the reload value when there is a valid config value. */ + if (mask && cpuc->acr_cfg_c[idx] != reload) { + wrmsrl(msr_c + x86_pmu.addr_offset(idx, false), reload); + cpuc->acr_cfg_c[idx] = reload; + } +} + +static void intel_pmu_enable_acr(struct perf_event *event) +{ + struct hw_perf_event *hwc = &event->hw; + + /* The PMU doesn't support ACR */ + if (!hybrid(event->pmu, acr_cntr_mask64)) + return; + + if (!is_acr_event_group(event) || !event->attr.config2) { + /* + * The disable doesn't clear the ACR CFG register. + * Check and clear the ACR CFG register. 
+ */ + intel_pmu_config_acr(hwc->idx, 0, 0); + return; + } + + intel_pmu_config_acr(hwc->idx, hwc->config1, -hwc->sample_period); +} + static void intel_pmu_enable_event(struct perf_event *event) { u64 enable_mask = ARCH_PERFMON_EVENTSEL_ENABLE; @@ -2869,10 +2917,12 @@ static void intel_pmu_enable_event(struct perf_event *event) enable_mask |= ARCH_PERFMON_EVENTSEL_BR_CNTR; intel_set_masks(event, idx); __x86_pmu_enable_event(hwc, enable_mask); + intel_pmu_enable_acr(event); break; case INTEL_PMC_IDX_FIXED ... INTEL_PMC_IDX_FIXED_BTS - 1: case INTEL_PMC_IDX_METRIC_BASE ... INTEL_PMC_IDX_METRIC_END: intel_pmu_enable_fixed(event); + intel_pmu_enable_acr(event); break; case INTEL_PMC_IDX_FIXED_BTS: if (!__this_cpu_read(cpu_hw_events.enabled)) @@ -3690,6 +3740,12 @@ intel_get_event_constraints(struct cpu_hw_events *cpuc, int idx, c2->weight = hweight64(c2->idxmsk64); } + if (is_acr_event_group(event)) { + c2 = dyn_constraint(cpuc, c2, idx); + c2->idxmsk64 &= event->hw.dyn_mask; + c2->weight = hweight64(c2->idxmsk64); + } + return c2; } @@ -3948,6 +4004,78 @@ static inline bool intel_pmu_has_cap(struct perf_event *event, int idx) return test_bit(idx, (unsigned long *)&intel_cap->capabilities); } +static bool intel_pmu_is_acr_group(struct perf_event *event) +{ + if (!hybrid(event->pmu, acr_cntr_mask64)) + return false; + + /* The group leader has the ACR flag set */ + if (is_acr_event_group(event)) + return true; + + /* The acr_mask is set */ + if (event->attr.config2) + return true; + + return false; +} + +static int intel_pmu_acr_check_reloadable_event(struct perf_event *event) +{ + struct perf_event *sibling, *leader = event->group_leader; + int num = 0; + + /* + * The acr_mask(config2) indicates the event can be reloaded by + * other events. Apply the acr_cntr_mask. 
+ */ + if (leader->attr.config2) { + leader->hw.dyn_mask = hybrid(leader->pmu, acr_cntr_mask64); + num++; + } else + leader->hw.dyn_mask = ~0ULL; + + for_each_sibling_event(sibling, leader) { + if (sibling->attr.config2) { + sibling->hw.dyn_mask = hybrid(sibling->pmu, acr_cntr_mask64); + num++; + } else + sibling->hw.dyn_mask = ~0ULL; + } + + if (event->attr.config2) { + event->hw.dyn_mask = hybrid(event->pmu, acr_cntr_mask64); + num++; + } else + event->hw.dyn_mask = ~0ULL; + + if (num > hweight64(hybrid(event->pmu, acr_cntr_mask64))) + return -EINVAL; + + return 0; +} + +/* + * Update the dyn_mask of each event to guarantee the event is scheduled + * to the counters which be able to cause a reload. + */ +static void intel_pmu_set_acr_dyn_mask(struct perf_event *event, int idx, + struct perf_event *last) +{ + struct perf_event *sibling, *leader = event->group_leader; + u64 mask = hybrid(event->pmu, acr_cntr_cause_mask); + + /* The event set in the acr_mask(config2) can causes a reload. */ + if (test_bit(idx, (unsigned long *)&leader->attr.config2)) + event->hw.dyn_mask &= mask; + for_each_sibling_event(sibling, leader) { + if (test_bit(idx, (unsigned long *)&sibling->attr.config2)) + event->hw.dyn_mask &= mask; + } + if (test_bit(idx, (unsigned long *)&last->attr.config2)) + event->hw.dyn_mask &= mask; +} + static int intel_pmu_hw_config(struct perf_event *event) { int ret = x86_pmu_hw_config(event); @@ -4055,6 +4183,41 @@ static int intel_pmu_hw_config(struct perf_event *event) event->hw.flags |= PERF_X86_EVENT_PEBS_VIA_PT; } + if (intel_pmu_is_acr_group(event)) { + struct perf_event *sibling, *leader = event->group_leader; + int event_idx = 0; + + if (is_metric_event(event)) + return -EINVAL; + + /* Not support freq mode */ + if (event->attr.freq) + return -EINVAL; + + /* PDist is not supported */ + if (event->attr.config2 && event->attr.precise_ip > 2) + return -EINVAL; + + /* + * It's hard to know whether the event is the last one of + * the group. 
Reconfigure the dyn_mask of each X86 event + * every time when add a new event. + * + * Check whether the reloadable counters is enough and + * initialize the dyn_mask. + */ + if (intel_pmu_acr_check_reloadable_event(event)) + return -EINVAL; + + /* Reconfigure the dyn_mask for each event */ + intel_pmu_set_acr_dyn_mask(leader, event_idx++, event); + for_each_sibling_event(sibling, leader) + intel_pmu_set_acr_dyn_mask(sibling, event_idx++, event); + intel_pmu_set_acr_dyn_mask(event, event_idx, event); + + leader->hw.flags |= PERF_X86_EVENT_ACR; + } + if ((event->attr.type == PERF_TYPE_HARDWARE) || (event->attr.type == PERF_TYPE_HW_CACHE)) return 0; @@ -4163,6 +4326,49 @@ static int intel_pmu_hw_config(struct perf_event *event) return 0; } +static void intel_pmu_update_acr_mask(struct cpu_hw_events *cpuc, int n, int *assign) +{ + struct perf_event *event; + int n0, i, off; + + if (cpuc->txn_flags & PERF_PMU_TXN_ADD) + n0 = cpuc->n_events - cpuc->n_txn; + else + n0 = cpuc->n_events; + + for (i = n0; i < n; i++) { + event = cpuc->event_list[i]; + event->hw.config1 = 0; + + /* Convert the group index into the counter index */ + for_each_set_bit(off, (unsigned long *)&event->attr.config2, n - n0) + set_bit(assign[n0 + off], (unsigned long *)&event->hw.config1); + } +} + +static int intel_pmu_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign) +{ + struct perf_event *event; + int ret = x86_schedule_events(cpuc, n, assign); + + if (ret) + return ret; + + if (cpuc->is_fake) + return ret; + + event = cpuc->event_list[n - 1]; + /* + * The acr_mask(config2) is the event-enabling order. + * Update it to the counter order after the counters are assigned. + */ + if (event && is_acr_event_group(event)) + intel_pmu_update_acr_mask(cpuc, n, assign); + + return 0; +} + + /* * Currently, the only caller of this function is the atomic_switch_perf_msrs(). 
* The host perf context helps to prepare the values of the real hardware for @@ -5319,7 +5525,7 @@ static __initconst const struct x86_pmu intel_pmu = { .set_period = intel_pmu_set_period, .update = intel_pmu_update, .hw_config = intel_pmu_hw_config, - .schedule_events = x86_schedule_events, + .schedule_events = intel_pmu_schedule_events, .eventsel = MSR_ARCH_PERFMON_EVENTSEL0, .perfctr = MSR_ARCH_PERFMON_PERFCTR0, .fixedctr = MSR_ARCH_PERFMON_FIXED_CTR0, @@ -5923,6 +6129,21 @@ td_is_visible(struct kobject *kobj, struct attribute *attr, int i) return attr->mode; } +PMU_FORMAT_ATTR(acr_mask, "config2:0-63"); + +static struct attribute *format_acr_attrs[] = { + &format_attr_acr_mask.attr, + NULL +}; + +static umode_t +acr_is_visible(struct kobject *kobj, struct attribute *attr, int i) +{ + struct device *dev = kobj_to_dev(kobj); + + return hybrid(dev_get_drvdata(dev), acr_cntr_mask64) ? attr->mode : 0; +} + static struct attribute_group group_events_td = { .name = "events", .is_visible = td_is_visible, @@ -5965,6 +6186,12 @@ static struct attribute_group group_format_evtsel_ext = { .is_visible = evtsel_ext_is_visible, }; +static struct attribute_group group_format_acr = { + .name = "format", + .attrs = format_acr_attrs, + .is_visible = acr_is_visible, +}; + static struct attribute_group group_default = { .attrs = intel_pmu_attrs, .is_visible = default_is_visible, @@ -5979,6 +6206,7 @@ static const struct attribute_group *attr_update[] = { &group_format_extra, &group_format_extra_skl, &group_format_evtsel_ext, + &group_format_acr, &group_default, NULL, }; @@ -6263,6 +6491,7 @@ static const struct attribute_group *hybrid_attr_update[] = { &group_caps_lbr, &hybrid_group_format_extra, &group_format_evtsel_ext, + &group_format_acr, &group_default, &hybrid_group_cpus, NULL, diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h index c7bde9cee699..6617f23428ae 100644 --- a/arch/x86/events/perf_event.h +++ b/arch/x86/events/perf_event.h @@ -115,6 +115,11 
@@ static inline bool is_branch_counters_group(struct perf_event *event) return event->group_leader->hw.flags & PERF_X86_EVENT_BRANCH_COUNTERS; } +static inline bool is_acr_event_group(struct perf_event *event) +{ + return event->group_leader->hw.flags & PERF_X86_EVENT_ACR; +} + struct amd_nb { int nb_id; /* NorthBridge id */ int refcnt; /* reference count */ @@ -281,6 +286,10 @@ struct cpu_hw_events { u64 fixed_ctrl_val; u64 active_fixed_ctrl_val; + /* Intel ACR configuration */ + u64 acr_cfg_b[X86_PMC_IDX_MAX]; + u64 acr_cfg_c[X86_PMC_IDX_MAX]; + /* * Intel LBR bits */ diff --git a/arch/x86/events/perf_event_flags.h b/arch/x86/events/perf_event_flags.h index 6c977c19f2cd..f21d00965af6 100644 --- a/arch/x86/events/perf_event_flags.h +++ b/arch/x86/events/perf_event_flags.h @@ -9,7 +9,7 @@ PERF_ARCH(PEBS_LD_HSW, 0x00008) /* haswell style datala, load */ PERF_ARCH(PEBS_NA_HSW, 0x00010) /* haswell style datala, unknown */ PERF_ARCH(EXCL, 0x00020) /* HT exclusivity on counter */ PERF_ARCH(DYNAMIC, 0x00040) /* dynamic alloc'd constraint */ - /* 0x00080 */ +PERF_ARCH(ACR, 0x00080) /* Auto counter reload */ PERF_ARCH(EXCL_ACCT, 0x00100) /* accounted EXCL event */ PERF_ARCH(AUTO_RELOAD, 0x00200) /* use PEBS auto-reload */ PERF_ARCH(LARGE_PEBS, 0x00400) /* use large PEBS */ diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index 82c6a4d350e0..83a86dc3f480 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -569,7 +569,11 @@ /* V6 PMON MSR range */ #define MSR_IA32_PMC_V6_GP0_CTR 0x1900 #define MSR_IA32_PMC_V6_GP0_CFG_A 0x1901 +#define MSR_IA32_PMC_V6_GP0_CFG_B 0x1902 +#define MSR_IA32_PMC_V6_GP0_CFG_C 0x1903 #define MSR_IA32_PMC_V6_FX0_CTR 0x1980 +#define MSR_IA32_PMC_V6_FX0_CFG_B 0x1982 +#define MSR_IA32_PMC_V6_FX0_CFG_C 0x1983 #define MSR_IA32_PMC_V6_STEP 4 /* KeyID partitioning between MKTME and TDX */ diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h index 794f66057878..5305add82df2 
100644 --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -157,7 +157,9 @@ struct hw_perf_event { union { struct { /* hardware */ u64 config; + u64 config1; u64 last_tag; + u64 dyn_mask; unsigned long config_base; unsigned long event_base; int event_base_rdpmc; -- 2.38.1 ^ permalink raw reply related [flat|nested] 6+ messages in thread
* Re: [PATCH 3/3] perf/x86/intel: Support auto counter reload 2024-09-30 15:41 ` [PATCH 3/3] perf/x86/intel: Support " kan.liang @ 2024-10-01 11:16 ` Andi Kleen 2024-10-01 14:41 ` Liang, Kan 0 siblings, 1 reply; 6+ messages in thread From: Andi Kleen @ 2024-10-01 11:16 UTC (permalink / raw) To: kan.liang Cc: peterz, mingo, acme, namhyung, irogers, adrian.hunter, linux-kernel, eranian, thomas.falcon I hope the perf tools will support a nicer syntax, the mask is quite obscure. On Mon, Sep 30, 2024 at 08:41:22AM -0700, kan.liang@linux.intel.com wrote: > } > > +static void intel_pmu_config_acr(int idx, u64 mask, u32 reload) > +{ > + struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events); > + int msr_b, msr_c; > + > + if (!mask && cpuc->acr_cfg_b[idx] == mask) > + return; if (!mask && !cpuc->acr_cfg_b[idx]) > + > + if (idx < INTEL_PMC_IDX_FIXED) { > + msr_b = MSR_IA32_PMC_V6_GP0_CFG_B; > + msr_c = MSR_IA32_PMC_V6_GP0_CFG_C; > + } else { > + msr_b = MSR_IA32_PMC_V6_FX0_CFG_B; > + msr_c = MSR_IA32_PMC_V6_FX0_CFG_C; > + idx -= INTEL_PMC_IDX_FIXED; > + } Does this handle metrics correctly? I assume you ran the fuzzer over this. > + if (cpuc->acr_cfg_b[idx] != mask) { > + wrmsrl(msr_b + x86_pmu.addr_offset(idx, false), mask); > + cpuc->acr_cfg_b[idx] = mask; > + } > + /* Only need to update the reload value when there is a valid config value. */ > + if (mask && cpuc->acr_cfg_c[idx] != reload) { > + wrmsrl(msr_c + x86_pmu.addr_offset(idx, false), reload); > + cpuc->acr_cfg_c[idx] = reload; Can reload be larger than the counter width? What happens then? > return c2; > } > > @@ -3948,6 +4004,78 @@ static inline bool intel_pmu_has_cap(struct perf_event *event, int idx) > return test_bit(idx, (unsigned long *)&intel_cap->capabilities); > } > > +static bool intel_pmu_is_acr_group(struct perf_event *event) > +{ > + if (!hybrid(event->pmu, acr_cntr_mask64)) > + return false; Shouldn't this error when the group leader has the flag set? 
> + /* The group leader has the ACR flag set */ > + if (is_acr_event_group(event)) > + return true; > + > + /* The acr_mask is set */ > + if (event->attr.config2) > + return true; > + * the group. Reconfigure the dyn_mask of each X86 event > + * every time when add a new event. > + * > + * Check whether the reloadable counters is enough and > + * initialize the dyn_mask. > + */ > + if (intel_pmu_acr_check_reloadable_event(event)) > + return -EINVAL; > + > + /* Reconfigure the dyn_mask for each event */ > + intel_pmu_set_acr_dyn_mask(leader, event_idx++, event); > + for_each_sibling_event(sibling, leader) > + intel_pmu_set_acr_dyn_mask(sibling, event_idx++, event); > + intel_pmu_set_acr_dyn_mask(event, event_idx, event); > + Shouldn't there be an error somewhere when a mask bit is set that exceeds the group? (maybe I missed it) I assume it could #GP on the MSR write, or maybe even overflow into some other field. -Andi ^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH 3/3] perf/x86/intel: Support auto counter reload
  2024-10-01 11:16 ` Andi Kleen
@ 2024-10-01 14:41   ` Liang, Kan
  0 siblings, 0 replies; 6+ messages in thread
From: Liang, Kan @ 2024-10-01 14:41 UTC (permalink / raw)
To: Andi Kleen
Cc: peterz, mingo, acme, namhyung, irogers, adrian.hunter,
    linux-kernel, eranian, thomas.falcon

On 2024-10-01 7:16 a.m., Andi Kleen wrote:
>
> I hope the perf tools will support a nicer syntax, the mask is quite
> obscure.

Yes, it's a little hard to use, but it is workable with the current
perf tool, so I posted the kernel patches separately. Thomas will work
on improving the tool side, which will provide a new and more
convenient option.

> On Mon, Sep 30, 2024 at 08:41:22AM -0700, kan.liang@linux.intel.com wrote:
>> }
>>
>> +static void intel_pmu_config_acr(int idx, u64 mask, u32 reload)
>> +{
>> +	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>> +	int msr_b, msr_c;
>> +
>> +	if (!mask && cpuc->acr_cfg_b[idx] == mask)
>> +		return;
>
> if (!mask && !cpuc->acr_cfg_b[idx])

Sure.

>> +
>> +	if (idx < INTEL_PMC_IDX_FIXED) {
>> +		msr_b = MSR_IA32_PMC_V6_GP0_CFG_B;
>> +		msr_c = MSR_IA32_PMC_V6_GP0_CFG_C;
>> +	} else {
>> +		msr_b = MSR_IA32_PMC_V6_FX0_CFG_B;
>> +		msr_c = MSR_IA32_PMC_V6_FX0_CFG_C;
>> +		idx -= INTEL_PMC_IDX_FIXED;
>> +	}
>
> Does this handle metrics correctly?

Do you mean a perf metrics event? perf errors out if a perf metrics
event is detected.

> I assume you ran the fuzzer over this.
>
>> +	if (cpuc->acr_cfg_b[idx] != mask) {
>> +		wrmsrl(msr_b + x86_pmu.addr_offset(idx, false), mask);
>> +		cpuc->acr_cfg_b[idx] = mask;
>> +	}
>> +	/* Only need to update the reload value when there is a valid config value. */
>> +	if (mask && cpuc->acr_cfg_c[idx] != reload) {
>> +		wrmsrl(msr_c + x86_pmu.addr_offset(idx, false), reload);
>> +		cpuc->acr_cfg_c[idx] = reload;
>
> Can reload be larger than the counter width? What happens then?

I will add a check in hw_config() to make sure that the period is less
than the counter width for the auto-reload case.

>> 	return c2;
>> }
>>
>> @@ -3948,6 +4004,78 @@ static inline bool intel_pmu_has_cap(struct perf_event *event, int idx)
>> 	return test_bit(idx, (unsigned long *)&intel_cap->capabilities);
>> }
>>
>> +static bool intel_pmu_is_acr_group(struct perf_event *event)
>> +{
>> +	if (!hybrid(event->pmu, acr_cntr_mask64))
>> +		return false;
>
> Shouldn't this error when the group leader
> has the flag set?

The kernel sets the flag on the group leader only when both config2
and acr_cntr_mask64 have values. The case where acr_cntr_mask64 is
zero but a group leader has the flag set cannot happen.

>> +	/* The group leader has the ACR flag set */
>> +	if (is_acr_event_group(event))
>> +		return true;
>> +
>> +	/* The acr_mask is set */
>> +	if (event->attr.config2)
>> +		return true;
>
>> +	 * the group. Reconfigure the dyn_mask of each X86 event
>> +	 * every time when add a new event.
>> +	 *
>> +	 * Check whether the reloadable counters is enough and
>> +	 * initialize the dyn_mask.
>> +	 */
>> +	if (intel_pmu_acr_check_reloadable_event(event))
>> +		return -EINVAL;
>> +
>> +	/* Reconfigure the dyn_mask for each event */
>> +	intel_pmu_set_acr_dyn_mask(leader, event_idx++, event);
>> +	for_each_sibling_event(sibling, leader)
>> +		intel_pmu_set_acr_dyn_mask(sibling, event_idx++, event);
>> +	intel_pmu_set_acr_dyn_mask(event, event_idx, event);
>> +
>
> Shouldn't there be an error somewhere when a mask bit is set that
> exceeds the group? (maybe I missed it)

We have no idea how big the whole group is at that moment; we don't
know whether the current event is the last one in the group. Even if
the mask (config2) exceeds the group, the invalid part is ignored, so
it should be harmless.

> I assume it could #GP on the MSR write, or maybe even overflow into
> some other field.

The mask (config2) set by the user is never written directly to the
MSRs. It is used to find the reloadable mask and the "causes reload"
mask, which come from the enumeration. That guarantees that only the
supported MSRs/counters are ever accessed.

The mask (config2) is also used in intel_pmu_update_acr_mask(), which
runs after the scheduler. The n - n0 bound guarantees that only the
bits within the group are converted; the invalid part of the mask
(config2) is ignored.

+	/* Convert the group index into the counter index */
+	for_each_set_bit(off, (unsigned long *)&event->attr.config2, n - n0)
+		set_bit(assign[n0 + off], (unsigned long *)&event->hw.config1);

Thanks,
Kan