* [Patch v3 0/4] perf/x86/intel: Fix bugs of auto counter reload sampling
@ 2026-04-27 8:55 Dapeng Mi
2026-04-27 8:55 ` [Patch v3 1/4] perf/x86/intel: Improve validation and configuration of ACR masks Dapeng Mi
` (3 more replies)
0 siblings, 4 replies; 7+ messages in thread
From: Dapeng Mi @ 2026-04-27 8:55 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Ian Rogers, Adrian Hunter, Alexander Shishkin,
Andi Kleen, Eranian Stephane
Cc: linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao, Dapeng Mi
Changes:
v2 -> v3:
- Fix the issue that the user-space ACR mask is not properly validated.
(Sashiko, Patch 1/4)
v1 -> v2:
- Clear stale mask for all events (Sashiko, Patch 1/4)
- Enable auto counter reload for DMR. (Patch 3/4)
- Remove duplicated CFG_C MSR value tracking. (Patch 4/4)
This small patch-set fixes several issues in auto counter reload support.
- The user-space ACR mask is not properly validated, and the stale
hardware mask is not cleared before a new one is set. Patch 1/4 fixes
these issues.
- PMI is enabled by default for self-reloaded ACR events, which causes a
suspicious NMI warning. Patch 2/4 fixes this issue.
- ACR sampling is not actually enabled on DMR. Patch 3/4 fixes this issue.
- Two variables are used to track the CFG_C MSR value independently for
ACR and arch-PEBS. This is error-prone and fragile. Patch 4/4 fixes
this issue.
Besides, an ACR unit test has been added to the perf tests; it will be
posted as a separate series.
Tests:
Ran the below ACR sampling commands on CWF, DMR, PTL and NVL (hybrid
platforms); no issues were found.
1. Non-PEBS ACR sampling
perf record -e '{instructions/period=20000,acr_mask=0x2/u,cycles/period=40000,acr_mask=0x3/u}' ~/test
2. PEBS ACR sampling
perf record -e '{instructions/period=20000,acr_mask=0x2/pu,cycles/period=40000,acr_mask=0x3/u}' ~/test
3. Perf-tools ACR sampling test
The patch (https://lore.kernel.org/all/20260420024528.2130065-1-dapeng1.mi@linux.intel.com/)
adds an ACR sampling test case to the perf-tools record tests.
perf test 148
History:
v1: https://lore.kernel.org/all/20260413010157.535990-1-dapeng1.mi@linux.intel.com/
v2: https://lore.kernel.org/all/20260420024528.2130065-1-dapeng1.mi@linux.intel.com/
Dapeng Mi (4):
perf/x86/intel: Improve validation and configuration of ACR masks
perf/x86/intel: Disable PMI for self-reloaded ACR events
perf/x86/intel: Enable auto counter reload for DMR
perf/x86/intel: Consolidate MSR_IA32_PERF_CFG_C tracking
arch/x86/events/intel/core.c | 63 ++++++++++++++++++++++++++----------
arch/x86/events/perf_event.h | 14 ++++++--
2 files changed, 57 insertions(+), 20 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 7+ messages in thread
* [Patch v3 1/4] perf/x86/intel: Improve validation and configuration of ACR masks
2026-04-27 8:55 [Patch v3 0/4] perf/x86/intel: Fix bugs of auto counter reload sampling Dapeng Mi
@ 2026-04-27 8:55 ` Dapeng Mi
2026-04-27 8:55 ` [Patch v3 2/4] perf/x86/intel: Disable PMI for self-reloaded ACR events Dapeng Mi
` (2 subsequent siblings)
3 siblings, 0 replies; 7+ messages in thread
From: Dapeng Mi @ 2026-04-27 8:55 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Ian Rogers, Adrian Hunter, Alexander Shishkin,
Andi Kleen, Eranian Stephane
Cc: linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao, Dapeng Mi, stable
Currently there are several issues with the user-space ACR mask
validation and configuration.
- The validation of the user-space ACR mask (attr.config2) is incomplete,
e.g., the mask could include a counter index that belongs to another
ACR event group, but this is not checked.
- An early return on an invalid ACR mask caused all subsequent ACR groups
to be skipped.
- The stale hardware ACR mask (hw.config1) is not cleared before the
new hardware ACR mask is set.
The following changes address all of the above issues.
- Calculate the counter-index bitmap for each ACR event group. Any bits
in the user-space mask that are not present in the group's bitmap are
now dropped.
- Instead of an early return on invalid bits, drop only the invalid
portions and continue iterating through all ACR events to ensure full
configuration.
- Explicitly clear the stale hardware ACR mask for each event prior to
writing the new configuration.
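The translation described above can be modeled in user space. The sketch
below is illustrative only (names, the fixed 64-bit loop, and the array
layout are simplifications, not the actual kernel code): bits of the
user-space mask are event indices relative to the group leader, and each
valid bit is mapped to the physical counter assigned to that event, while
invalid bits are silently dropped instead of aborting the whole setup.

```c
#include <assert.h>
#include <stdint.h>

/* Build the bitmap of physical counters assigned to one ACR group
 * spanning event-list slots [first, last). */
static uint64_t acr_group_mask(const int *assign, int first, int last)
{
	uint64_t mask = 0;

	for (int k = first; k < last; k++)
		mask |= 1ULL << assign[k];
	return mask;
}

/* Translate a user-space ACR mask (bits are event indices relative to
 * the group leader) into a physical counter bitmask. Bits that point
 * outside the event list or outside the group are dropped, not fatal. */
static uint64_t acr_translate_mask(uint64_t config2, const int *assign,
				   int first, int last, int n_events)
{
	uint64_t group = acr_group_mask(assign, first, last);
	uint64_t hw_mask = 0;

	for (int bit = 0; bit < 64; bit++) {
		int idx = first + bit;

		if (!(config2 & (1ULL << bit)))
			continue;
		if (idx >= n_events || !((1ULL << assign[idx]) & group))
			continue;	/* invalid bit: drop, don't bail out */
		hw_mask |= 1ULL << assign[idx];
	}
	return hw_mask;
}
```

For example, with two group events assigned to counters 3 and 5, a user
mask of 0x3 maps to counters 3 and 5, while a mask of 0x6 keeps only the
bit for counter 5 and drops the out-of-range bit.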
Cc: stable@vger.kernel.org
Fixes: ec980e4facef ("perf/x86/intel: Support auto counter reload")
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
v3: new patch.
arch/x86/events/intel/core.c | 32 +++++++++++++++++++++++++-------
1 file changed, 25 insertions(+), 7 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 4768236c054b..1a2c268018a2 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3332,23 +3332,41 @@ static void intel_pmu_enable_event(struct perf_event *event)
static void intel_pmu_acr_late_setup(struct cpu_hw_events *cpuc)
{
struct perf_event *event, *leader;
- int i, j, idx;
+ int i, j, k, bit, idx;
+ u64 group_mask;
for (i = 0; i < cpuc->n_events; i++) {
leader = cpuc->event_list[i];
if (!is_acr_event_group(leader))
continue;
- /* The ACR events must be contiguous. */
+ /* Find the last event of the ACR group. */
for (j = i; j < cpuc->n_events; j++) {
event = cpuc->event_list[j];
if (event->group_leader != leader->group_leader)
break;
- for_each_set_bit(idx, (unsigned long *)&event->attr.config2, X86_PMC_IDX_MAX) {
- if (i + idx >= cpuc->n_events ||
- !is_acr_event_group(cpuc->event_list[i + idx]))
- return;
- __set_bit(cpuc->assign[i + idx], (unsigned long *)&event->hw.config1);
+ }
+
+ /* Figure out the group indices bitmap. */
+ group_mask = 0;
+ for (k = i; k < j; k++)
+ group_mask |= BIT_ULL(cpuc->assign[k]);
+
+ /*
+ * Translate the user-space ACR mask (attr.config2) into the physical
+ * counter bitmask (hw.config1) for each ACR event in the group.
+ * NOTE: ACR event contiguity is guaranteed by intel_pmu_hw_config().
+ */
+ for (k = i; k < j; k++) {
+ event = cpuc->event_list[k];
+ event->hw.config1 = 0;
+ for_each_set_bit(bit, (unsigned long *)&event->attr.config2, X86_PMC_IDX_MAX) {
+ idx = i + bit;
+ if (idx >= cpuc->n_events ||
+ !(BIT_ULL(cpuc->assign[idx]) & group_mask) ||
+ !is_acr_event_group(cpuc->event_list[idx]))
+ continue;
+ __set_bit(cpuc->assign[idx], (unsigned long *)&event->hw.config1);
}
}
i = j - 1;
--
2.34.1
* [Patch v3 2/4] perf/x86/intel: Disable PMI for self-reloaded ACR events
2026-04-27 8:55 [Patch v3 0/4] perf/x86/intel: Fix bugs of auto counter reload sampling Dapeng Mi
2026-04-27 8:55 ` [Patch v3 1/4] perf/x86/intel: Improve validation and configuration of ACR masks Dapeng Mi
@ 2026-04-27 8:55 ` Dapeng Mi
2026-04-27 8:55 ` [Patch v3 3/4] perf/x86/intel: Enable auto counter reload for DMR Dapeng Mi
2026-04-27 8:55 ` [Patch v3 4/4] perf/x86/intel: Consolidate MSR_IA32_PERF_CFG_C tracking Dapeng Mi
3 siblings, 0 replies; 7+ messages in thread
From: Dapeng Mi @ 2026-04-27 8:55 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Ian Rogers, Adrian Hunter, Alexander Shishkin,
Andi Kleen, Eranian Stephane
Cc: linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao, Dapeng Mi, stable
On platforms with Auto Counter Reload (ACR) support, such as NVL, a
"NMI received for unknown reason 30" warning is observed when running
multiple events in a group with ACR enabled:
$ perf record -e '{instructions/period=20000,acr_mask=0x2/u,\
cycles/period=40000,acr_mask=0x3/u}' ./test
The warning occurs because the Performance Monitoring Interrupt (PMI)
is enabled for the self-reloaded event (the cycles event in this case).
According to the Intel SDM, the overflow bit
(IA32_PERF_GLOBAL_STATUS.PMCn_OVF) is never set for self-reloaded events.
Since the bit is not set, the perf NMI handler cannot identify the source
of the interrupt, leading to the "unknown reason" message.
Furthermore, enabling PMI for self-reloaded events is unnecessary and
can lead to extraneous records that pollute the user's requested data.
Disable the interrupt bit for all events configured with ACR self-reload.
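The intent can be sketched as a small user-space model. This is not the
kernel code; the bit position and helper name are taken from the public
IA32_PERFEVTSELx layout (INT is bit 20) and the condition mirrors the
logic this patch adds: self-reloaded ACR events never get the interrupt
bit, PEBS events rely on the PEBS machinery instead, and only ordinary
sampling events keep PMI enabled.

```c
#include <assert.h>
#include <stdint.h>

#define EVENTSEL_INT	(1ULL << 20)	/* IA32_PERFEVTSELx.INT */

/* A self-reloaded ACR event reloads its own counter, so the overflow
 * bit is never raised and a PMI from it cannot be attributed. Clear the
 * interrupt-enable bit for such events; PEBS (precise_ip) events signal
 * through the PEBS buffer, so only plain sampling events keep PMI. */
static uint64_t acr_fixup_int(uint64_t config, int self_reload,
			      int precise_ip)
{
	if (self_reload)
		config &= ~EVENTSEL_INT;
	else if (!precise_ip)
		config |= EVENTSEL_INT;
	return config;
}
```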
Reported-by: Andi Kleen <ak@linux.intel.com>
Cc: stable@vger.kernel.org
Fixes: ec980e4facef ("perf/x86/intel: Support auto counter reload")
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 17 +++++++++++++----
arch/x86/events/perf_event.h | 10 ++++++++++
2 files changed, 23 insertions(+), 4 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 1a2c268018a2..c1841fa89908 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3118,11 +3118,11 @@ static void intel_pmu_enable_fixed(struct perf_event *event)
intel_set_masks(event, idx);
/*
- * Enable IRQ generation (0x8), if not PEBS,
- * and enable ring-3 counting (0x2) and ring-0 counting (0x1)
- * if requested:
+ * Enable IRQ generation (0x8), if not PEBS and self-reloaded
+ * ACR event, and enable ring-3 counting (0x2) and ring-0
+ * counting (0x1) if requested:
*/
- if (!event->attr.precise_ip)
+ if (!event->attr.precise_ip && !is_acr_self_reload_event(event))
bits |= INTEL_FIXED_0_ENABLE_PMI;
if (hwc->config & ARCH_PERFMON_EVENTSEL_USR)
bits |= INTEL_FIXED_0_USER;
@@ -3306,6 +3306,15 @@ static void intel_pmu_enable_event(struct perf_event *event)
intel_set_masks(event, idx);
static_call_cond(intel_pmu_enable_acr_event)(event);
static_call_cond(intel_pmu_enable_event_ext)(event);
+ /*
+ * For self-reloaded ACR event, don't enable PMI since
+ * HW won't set overflow bit in GLOBAL_STATUS. Otherwise,
+ * the PMI would be recognized as a suspicious NMI.
+ */
+ if (is_acr_self_reload_event(event))
+ hwc->config &= ~ARCH_PERFMON_EVENTSEL_INT;
+ else if (!event->attr.precise_ip)
+ hwc->config |= ARCH_PERFMON_EVENTSEL_INT;
__x86_pmu_enable_event(hwc, enable_mask);
break;
case INTEL_PMC_IDX_FIXED ... INTEL_PMC_IDX_FIXED_BTS - 1:
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index fad87d3c8b2c..524668dcf4cc 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -137,6 +137,16 @@ static inline bool is_acr_event_group(struct perf_event *event)
return check_leader_group(event->group_leader, PERF_X86_EVENT_ACR);
}
+static inline bool is_acr_self_reload_event(struct perf_event *event)
+{
+ struct hw_perf_event *hwc = &event->hw;
+
+ if (hwc->idx < 0)
+ return false;
+
+ return test_bit(hwc->idx, (unsigned long *)&hwc->config1);
+}
+
struct amd_nb {
int nb_id; /* NorthBridge id */
int refcnt; /* reference count */
--
2.34.1
* [Patch v3 3/4] perf/x86/intel: Enable auto counter reload for DMR
2026-04-27 8:55 [Patch v3 0/4] perf/x86/intel: Fix bugs of auto counter reload sampling Dapeng Mi
2026-04-27 8:55 ` [Patch v3 1/4] perf/x86/intel: Improve validation and configuration of ACR masks Dapeng Mi
2026-04-27 8:55 ` [Patch v3 2/4] perf/x86/intel: Disable PMI for self-reloaded ACR events Dapeng Mi
@ 2026-04-27 8:55 ` Dapeng Mi
2026-04-27 8:55 ` [Patch v3 4/4] perf/x86/intel: Consolidate MSR_IA32_PERF_CFG_C tracking Dapeng Mi
3 siblings, 0 replies; 7+ messages in thread
From: Dapeng Mi @ 2026-04-27 8:55 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Ian Rogers, Adrian Hunter, Alexander Shishkin,
Andi Kleen, Eranian Stephane
Cc: linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao, Dapeng Mi, stable
The Panther Cove µarch adds support for auto counter reload (ACR), but
the static_call intel_pmu_enable_acr_event() is not updated for Panther
Cove as used by DMR. As a result, auto counter reload is never actually
enabled on DMR.
Update static_call intel_pmu_enable_acr_event() in intel_pmu_init_pnc().
Cc: stable@vger.kernel.org
Fixes: d345b6bb8860 ("perf/x86/intel: Add core PMU support for DMR")
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index c1841fa89908..60568a9ce06b 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -7518,6 +7518,7 @@ static __always_inline void intel_pmu_init_pnc(struct pmu *pmu)
hybrid(pmu, event_constraints) = intel_pnc_event_constraints;
hybrid(pmu, pebs_constraints) = intel_pnc_pebs_event_constraints;
hybrid(pmu, extra_regs) = intel_pnc_extra_regs;
+ static_call_update(intel_pmu_enable_acr_event, intel_pmu_enable_acr);
}
static __always_inline void intel_pmu_init_skt(struct pmu *pmu)
--
2.34.1
* [Patch v3 4/4] perf/x86/intel: Consolidate MSR_IA32_PERF_CFG_C tracking
2026-04-27 8:55 [Patch v3 0/4] perf/x86/intel: Fix bugs of auto counter reload sampling Dapeng Mi
` (2 preceding siblings ...)
2026-04-27 8:55 ` [Patch v3 3/4] perf/x86/intel: Enable auto counter reload for DMR Dapeng Mi
@ 2026-04-27 8:55 ` Dapeng Mi
2026-04-28 13:00 ` Peter Zijlstra
3 siblings, 1 reply; 7+ messages in thread
From: Dapeng Mi @ 2026-04-27 8:55 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Ian Rogers, Adrian Hunter, Alexander Shishkin,
Andi Kleen, Eranian Stephane
Cc: linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao, Dapeng Mi
Both Auto Counter Reload (ACR) and Architectural PEBS use the PERF_CFG_C
MSRs to configure event behavior. Currently, the driver maintains two
independent variables, acr_cfg_c and cfg_c_val, to cache the values
intended for these MSRs.
Using separate variables to track a single hardware register state is
error-prone and can lead to configuration conflicts. Consolidate the
tracking into a single cfg_c_val variable to ensure a unified and
consistent view of the PERF_CFG_C MSR state.
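The write-caching pattern being consolidated can be sketched as follows.
This is a simplified user-space model, not the kernel code: the single
cfg_c_val array stands in for the one shared cache, and wrmsr_count
stands in for the real wrmsrl() call, to show that a single cache both
avoids redundant MSR writes and cannot drift out of sync the way two
independent caches can.

```c
#include <assert.h>
#include <stdint.h>

static uint64_t cfg_c_val[8];	/* single cache, shared by ACR and
				 * arch-PEBS paths */
static int wrmsr_count;		/* counts simulated wrmsrl() calls */

/* Write a CFG_C MSR only when the cached value actually changes. */
static void write_cfg_c(int idx, uint64_t val)
{
	if (cfg_c_val[idx] == val)
		return;		/* unchanged: skip the expensive MSR write */
	wrmsr_count++;		/* would be wrmsrl(msr_c + offset, val) */
	cfg_c_val[idx] = val;
}
```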
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 13 +++++++------
arch/x86/events/perf_event.h | 4 +---
2 files changed, 8 insertions(+), 9 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 60568a9ce06b..013e6e02706d 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3169,10 +3169,10 @@ static void intel_pmu_config_acr(int idx, u64 mask, u32 reload)
wrmsrl(msr_b + msr_offset, mask);
cpuc->acr_cfg_b[idx] = mask;
}
- /* Only need to update the reload value when there is a valid config value. */
- if (mask && cpuc->acr_cfg_c[idx] != reload) {
+ /* Only update CFG_C reload when ACR is actively enabled (mask != 0) */
+ if (mask && ((cpuc->cfg_c_val[idx] & ARCH_PEBS_RELOAD) != reload)) {
wrmsrl(msr_c + msr_offset, reload);
- cpuc->acr_cfg_c[idx] = reload;
+ cpuc->cfg_c_val[idx] = reload;
}
}
@@ -3198,14 +3198,15 @@ static void intel_pmu_enable_event_ext(struct perf_event *event)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
struct hw_perf_event *hwc = &event->hw;
- union arch_pebs_index old, new;
- struct arch_pebs_cap cap;
u64 ext = 0;
- cap = hybrid(cpuc->pmu, arch_pebs_cap);
+ if (is_acr_event_group(event))
+ ext |= (-hwc->sample_period) & ARCH_PEBS_RELOAD;
if (event->attr.precise_ip) {
u64 pebs_data_cfg = intel_get_arch_pebs_data_config(event);
+ struct arch_pebs_cap cap = hybrid(cpuc->pmu, arch_pebs_cap);
+ union arch_pebs_index old, new;
ext |= ARCH_PEBS_EN;
if (hwc->flags & PERF_X86_EVENT_AUTO_RELOAD)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 524668dcf4cc..40d6fe0afc4a 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -322,10 +322,8 @@ struct cpu_hw_events {
u64 fixed_ctrl_val;
u64 active_fixed_ctrl_val;
- /* Intel ACR configuration */
+ /* Intel ACR/arch-PEBS configuration */
u64 acr_cfg_b[X86_PMC_IDX_MAX];
- u64 acr_cfg_c[X86_PMC_IDX_MAX];
- /* Cached CFG_C values */
u64 cfg_c_val[X86_PMC_IDX_MAX];
/*
--
2.34.1
* Re: [Patch v3 4/4] perf/x86/intel: Consolidate MSR_IA32_PERF_CFG_C tracking
2026-04-27 8:55 ` [Patch v3 4/4] perf/x86/intel: Consolidate MSR_IA32_PERF_CFG_C tracking Dapeng Mi
@ 2026-04-28 13:00 ` Peter Zijlstra
2026-04-29 0:50 ` Mi, Dapeng
0 siblings, 1 reply; 7+ messages in thread
From: Peter Zijlstra @ 2026-04-28 13:00 UTC (permalink / raw)
To: Dapeng Mi
Cc: Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Ian Rogers,
Adrian Hunter, Alexander Shishkin, Andi Kleen, Eranian Stephane,
linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao
On Mon, Apr 27, 2026 at 04:55:13PM +0800, Dapeng Mi wrote:
> Both Auto Counter Reload (ACR) and Architectural PEBS use the PERF_CFG_C
> MSRs to configure event behavior. Currently, the driver maintains two
> independent variables acr_cfg_c and cfg_c_val to cache the values intended
> for these MSRs.
>
> Using separate variables to track a single hardware register state is
> error-prone and can lead to configuration conflicts. Consolidate the
> tracking into a single cfg_c_val variable to ensure a unified and
> consistent view of the PERF_CFG_C MSR state.
>
> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
So the earlier patches deserve to be in perf/urgent, but this one
doesn't actually fix anything and goes in perf/core ?
* Re: [Patch v3 4/4] perf/x86/intel: Consolidate MSR_IA32_PERF_CFG_C tracking
2026-04-28 13:00 ` Peter Zijlstra
@ 2026-04-29 0:50 ` Mi, Dapeng
0 siblings, 0 replies; 7+ messages in thread
From: Mi, Dapeng @ 2026-04-29 0:50 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Ian Rogers,
Adrian Hunter, Alexander Shishkin, Andi Kleen, Eranian Stephane,
linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao
On 4/28/2026 9:00 PM, Peter Zijlstra wrote:
> On Mon, Apr 27, 2026 at 04:55:13PM +0800, Dapeng Mi wrote:
>> Both Auto Counter Reload (ACR) and Architectural PEBS use the PERF_CFG_C
>> MSRs to configure event behavior. Currently, the driver maintains two
>> independent variables acr_cfg_c and cfg_c_val to cache the values intended
>> for these MSRs.
>>
>> Using separate variables to track a single hardware register state is
>> error-prone and can lead to configuration conflicts. Consolidate the
>> tracking into a single cfg_c_val variable to ensure a unified and
>> consistent view of the PERF_CFG_C MSR state.
>>
>> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> So the earlier patches deserve to be in perf/urgent, but this one
> doesn't actually fix anything and goes in perf/core ?
Yes, I suppose so.
But Sashiko found a new corner-case issue.
"
If Event A moves to a new counter but Event B itself does not move,
match_prev_assignment() would evaluate to true for Event B, and
x86_pmu_start() would be skipped.
Does this mean the updated hw.config1 for Event B is never written to the
physical hardware MSR, breaking the Auto Counter Reload functionality?
"
and Patch 1/4 can be further optimized and simplified.
I will post a new version of the patchset to fix all of these.
Thanks.
>