From: "Mi, Dapeng" <dapeng1.mi@linux.intel.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
Namhyung Kim <namhyung@kernel.org>,
Thomas Gleixner <tglx@linutronix.de>,
Dave Hansen <dave.hansen@linux.intel.com>,
Ian Rogers <irogers@google.com>,
Adrian Hunter <adrian.hunter@intel.com>,
Jiri Olsa <jolsa@kernel.org>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Andi Kleen <ak@linux.intel.com>,
Eranian Stephane <eranian@google.com>,
Mark Rutland <mark.rutland@arm.com>,
broonie@kernel.org, Ravi Bangoria <ravi.bangoria@amd.com>,
linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
Zide Chen <zide.chen@intel.com>,
Falcon Thomas <thomas.falcon@intel.com>,
Dapeng Mi <dapeng1.mi@intel.com>,
Xudong Hao <xudong.hao@intel.com>
Subject: Re: [Patch v6 01/22] perf/x86/intel: Restrict PEBS_ENABLE writes to PEBS-capable counters
Date: Wed, 11 Feb 2026 13:47:35 +0800 [thread overview]
Message-ID: <586f9204-1b37-422c-b964-dc62f720aa41@linux.intel.com> (raw)
In-Reply-To: <20260210153643.GA3931095@noisy.programming.kicks-ass.net>
On 2/10/2026 11:36 PM, Peter Zijlstra wrote:
> On Mon, Feb 09, 2026 at 03:20:26PM +0800, Dapeng Mi wrote:
>> Before the introduction of extended PEBS, PEBS supported only
>> general-purpose (GP) counters. In a virtual machine (VM) environment,
>> the PEBS_BASELINE bit in PERF_CAPABILITIES may not be set, but the PEBS
>> format could be indicated as 4 or higher. In such cases, PEBS events
>> might be scheduled to fixed counters, and writing the corresponding bits
>> into the PEBS_ENABLE MSR could cause a #GP fault.
>>
>> To prevent writing unsupported bits into the PEBS_ENABLE MSR, ensure
>> cpuc->pebs_enabled aligns with x86_pmu.pebs_capable and restrict the
>> writes to only PEBS-capable counter bits.
> This seems very wrong. Should we not avoid getting those bits set in the
> first place?
Hmm, yes. I originally thought it was fine to just block writes of these
invalid bits to the PEBS_ENABLE MSR, but I agree they should be blocked as
early as possible.
Currently the intel_pebs_constraints() helper doesn't check whether the
matched PEBS constraint contains fixed counter indexes when extended PEBS
is not supported.
We may need the change below (build-tested only, not yet run).
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 94ada08360f1..bc36808bdb7b 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1557,6 +1557,14 @@ struct event_constraint *intel_pebs_constraints(struct perf_event *event)
 	if (pebs_constraints) {
 		for_each_event_constraint(c, pebs_constraints) {
 			if (constraint_match(c, event->hw.config)) {
+				/*
+				 * If fixed counters are suggested in the constraints,
+				 * but extended PEBS is not supported, emptyconstraint
+				 * should be returned.
+				 */
+				if ((c->idxmsk64 & ~PEBS_COUNTER_MASK) &&
+				    !(x86_pmu.flags & PMU_FL_PEBS_ALL))
+					break;
 				event->hw.flags |= c->flags;
 				return c;
 			}
Thanks.
>
> That is; the fact that we set those cpuc->pebs_enabled bits indicates
> that we 'successfully' scheduled PEBS counters. And then we silently
> disable PEBS when programming the hardware.
>
> Or am I reading this wrong?
>
>> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
>> ---
>>
>> V6: new patch.
>>
>> arch/x86/events/intel/core.c | 6 ++++--
>> arch/x86/events/intel/ds.c | 11 +++++++----
>> 2 files changed, 11 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>> index f3ae1f8ee3cd..546ebc7e1624 100644
>> --- a/arch/x86/events/intel/core.c
>> +++ b/arch/x86/events/intel/core.c
>> @@ -3554,8 +3554,10 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
>> * cpuc->enabled has been forced to 0 in PMI.
>> * Update the MSR if pebs_enabled is changed.
>> */
>> - if (pebs_enabled != cpuc->pebs_enabled)
>> - wrmsrq(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
>> + if (pebs_enabled != cpuc->pebs_enabled) {
>> + wrmsrq(MSR_IA32_PEBS_ENABLE,
>> + cpuc->pebs_enabled & x86_pmu.pebs_capable);
>> + }
>>
>> /*
>> * Above PEBS handler (PEBS counters snapshotting) has updated fixed
>> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
>> index 5027afc97b65..57805c6ba0c3 100644
>> --- a/arch/x86/events/intel/ds.c
>> +++ b/arch/x86/events/intel/ds.c
>> @@ -1963,6 +1963,7 @@ void intel_pmu_pebs_disable(struct perf_event *event)
>> {
>> struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>> struct hw_perf_event *hwc = &event->hw;
>> + u64 pebs_enabled;
>>
>> __intel_pmu_pebs_disable(event);
>>
>> @@ -1974,16 +1975,18 @@ void intel_pmu_pebs_disable(struct perf_event *event)
>>
>> intel_pmu_pebs_via_pt_disable(event);
>>
>> - if (cpuc->enabled)
>> - wrmsrq(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
>> + pebs_enabled = cpuc->pebs_enabled & x86_pmu.pebs_capable;
>> + if (pebs_enabled)
>> + wrmsrq(MSR_IA32_PEBS_ENABLE, pebs_enabled);
>> }
>>
>> void intel_pmu_pebs_enable_all(void)
>> {
>> struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>> + u64 pebs_enabled = cpuc->pebs_enabled & x86_pmu.pebs_capable;
>>
>> - if (cpuc->pebs_enabled)
>> - wrmsrq(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
>> + if (pebs_enabled)
>> + wrmsrq(MSR_IA32_PEBS_ENABLE, pebs_enabled);
>> }
>>
>> void intel_pmu_pebs_disable_all(void)
>> --
>> 2.34.1
>>
Thread overview: 45+ messages
2026-02-09 7:20 [Patch v6 00/22] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
2026-02-09 7:20 ` [Patch v6 01/22] perf/x86/intel: Restrict PEBS_ENABLE writes to PEBS-capable counters Dapeng Mi
2026-02-10 15:36 ` Peter Zijlstra
2026-02-11 5:47 ` Mi, Dapeng [this message]
2026-02-09 7:20 ` [Patch v6 02/22] perf/x86/intel: Enable large PEBS sampling for XMMs Dapeng Mi
2026-02-09 7:20 ` [Patch v6 03/22] perf/x86/intel: Convert x86_perf_regs to per-cpu variables Dapeng Mi
2026-02-09 7:20 ` [Patch v6 04/22] perf: Eliminate duplicate arch-specific functions definations Dapeng Mi
2026-02-09 7:20 ` [Patch v6 05/22] perf/x86: Use x86_perf_regs in the x86 nmi handler Dapeng Mi
2026-02-10 18:40 ` Peter Zijlstra
2026-02-11 6:26 ` Mi, Dapeng
2026-02-09 7:20 ` [Patch v6 06/22] perf/x86: Introduce x86-specific x86_pmu_setup_regs_data() Dapeng Mi
2026-02-09 7:20 ` [Patch v6 07/22] x86/fpu/xstate: Add xsaves_nmi() helper Dapeng Mi
2026-02-09 7:20 ` [Patch v6 08/22] x86/fpu: Ensure TIF_NEED_FPU_LOAD is set after saving FPU state Dapeng Mi
2026-02-11 19:39 ` Chang S. Bae
2026-02-11 19:55 ` Dave Hansen
2026-02-24 6:50 ` Mi, Dapeng
2026-02-25 13:02 ` Peter Zijlstra
2026-02-24 5:35 ` Mi, Dapeng
2026-02-24 19:13 ` Chang S. Bae
2026-02-25 0:35 ` Mi, Dapeng
2026-02-09 7:20 ` [Patch v6 09/22] perf: Move and rename has_extended_regs() for ARCH-specific use Dapeng Mi
2026-02-09 7:20 ` [Patch v6 10/22] perf/x86: Enable XMM Register Sampling for Non-PEBS Events Dapeng Mi
2026-02-15 23:58 ` Chang S. Bae
2026-02-24 7:11 ` Mi, Dapeng
2026-02-24 19:13 ` Chang S. Bae
2026-02-25 0:55 ` Mi, Dapeng
2026-02-25 1:11 ` Chang S. Bae
2026-02-25 1:36 ` Mi, Dapeng
2026-02-25 3:14 ` Chang S. Bae
2026-02-25 6:13 ` Mi, Dapeng
2026-02-09 7:20 ` [Patch v6 11/22] perf/x86: Enable XMM register sampling for REGS_USER case Dapeng Mi
2026-02-09 7:20 ` [Patch v6 12/22] perf: Add sampling support for SIMD registers Dapeng Mi
2026-02-10 20:04 ` Peter Zijlstra
2026-02-11 6:56 ` Mi, Dapeng
2026-02-09 7:20 ` [Patch v6 13/22] perf/x86: Enable XMM sampling using sample_simd_vec_reg_* fields Dapeng Mi
2026-02-09 7:20 ` [Patch v6 14/22] perf/x86: Enable YMM " Dapeng Mi
2026-02-09 7:20 ` [Patch v6 15/22] perf/x86: Enable ZMM " Dapeng Mi
2026-02-09 7:20 ` [Patch v6 16/22] perf/x86: Enable OPMASK sampling using sample_simd_pred_reg_* fields Dapeng Mi
2026-02-09 7:20 ` [Patch v6 17/22] perf: Enhance perf_reg_validate() with simd_enabled argument Dapeng Mi
2026-02-09 7:20 ` [Patch v6 18/22] perf/x86: Enable eGPRs sampling using sample_regs_* fields Dapeng Mi
2026-02-09 7:20 ` [Patch v6 19/22] perf/x86: Enable SSP " Dapeng Mi
2026-02-09 7:20 ` [Patch v6 20/22] perf/x86/intel: Enable PERF_PMU_CAP_SIMD_REGS capability Dapeng Mi
2026-02-09 7:20 ` [Patch v6 21/22] perf/x86/intel: Enable arch-PEBS based SIMD/eGPRs/SSP sampling Dapeng Mi
2026-02-09 7:20 ` [Patch v6 22/22] perf/x86: Activate back-to-back NMI detection for arch-PEBS induced NMIs Dapeng Mi
2026-02-09 8:48 ` [Patch v6 00/22] Support SIMD/eGPRs/SSP registers sampling for perf Mi, Dapeng