From: "Mi, Dapeng" <dapeng1.mi@linux.intel.com>
To: sashiko@lists.linux.dev
Cc: linux-perf-users@vger.kernel.org
Subject: Re: [Patch v7 22/24] perf/x86/intel: Enable arch-PEBS based SIMD/eGPRs/SSP sampling
Date: Wed, 29 Apr 2026 13:36:28 +0800 [thread overview]
Message-ID: <9bd99db1-9761-4b15-9f6d-d7bd45723255@linux.intel.com> (raw)
In-Reply-To: <20260425030814.0A75EC19425@smtp.kernel.org>
On 4/25/2026 11:08 AM, sashiko-bot@kernel.org wrote:
> Sashiko AI review found 2 potential issue(s):
> - [Critical] Leaking stale per-CPU FPU/extended register state to user space when a PEBS event triggers in kernel mode and requests PERF_SAMPLE_REGS_USER.
> - [High] Incorrect bitwise AND with a raw u64 bitmask in `intel_pebs_support_regs` permanently disables large PEBS for XMM registers on arch-PEBS platforms.
> --
>
> commit 7ad07fe08e2e0b394952e6f0ccf5f040f99870e3
> Author: Dapeng Mi <dapeng1.mi@linux.intel.com>
>
> perf/x86/intel: Enable arch-PEBS based SIMD/eGPRs/SSP sampling
>
> This patch enables arch-PEBS based SIMD/eGPRs/SSP registers sampling
> using the XSAVE-Enabled Registers (XER) group.
>
>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>> index 3d64f28b7e9ac..b280a39f68bca 100644
>> --- a/arch/x86/events/intel/core.c
>> +++ b/arch/x86/events/intel/core.c
> [ ... ]
>
>> @@ -4416,6 +4431,34 @@ static void intel_pebs_aliases_skl(struct perf_event *event)
>> return intel_pebs_aliases_precdist(event);
>> }
>>
>> +static inline bool intel_pebs_support_regs(struct perf_event *event, u64 regs)
>> +{
>> + struct arch_pebs_cap cap = hybrid(event->pmu, arch_pebs_cap);
>> + int pebs_format = x86_pmu.intel_cap.pebs_format;
>> + bool supported = true;
>> +
>> + /* SSP */
>> + if (regs & PEBS_DATACFG_GP)
>> + supported &= x86_pmu.arch_pebs && (ARCH_PEBS_GPR & cap.caps);
>> + if (regs & PEBS_DATACFG_XMMS) {
>> + supported &= x86_pmu.arch_pebs ?
>> + ARCH_PEBS_VECR_XMM & cap.caps :
>> + pebs_format > 3 && x86_pmu.intel_cap.pebs_baseline;
>> + }
> Will this permanently disable large PEBS for XMM registers?
>
> Because `supported` is a bool (initially 1) and ARCH_PEBS_VECR_XMM & cap.caps
> evaluates to BIT_ULL(49), the compound assignment computes
> 1 & 0x2000000000000ULL, which is 0, and assigns that back to `supported`.
>
> Should this bitmask be normalized to a boolean value, perhaps using
> !!(ARCH_PEBS_VECR_XMM & cap.caps) inside the ternary operator?
Yes, it's a bug. Will fix it in the next version.
>
> [ ... ]
>
>> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
>> index 560f178d7ac92..61d72c58f9a25 100644
>> --- a/arch/x86/events/intel/ds.c
>> +++ b/arch/x86/events/intel/ds.c
> [ ... ]
>
>> @@ -2708,15 +2719,69 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
>> meminfo->tsx_tuning, ax);
>> }
>>
>> - if (header->xmm) {
>> + if (header->xmm || header->ymmh || header->egpr ||
>> + header->opmask || header->zmmh || header->h16zmm) {
>> + struct arch_pebs_xer_header *xer_header = next_record;
>> struct pebs_xmm *xmm;
>> + struct ymmh_struct *ymmh;
>> + struct avx_512_zmm_uppers_state *zmmh;
>> + struct avx_512_hi16_state *h16zmm;
>> + struct avx_512_opmask_state *opmask;
>> + struct apx_state *egpr;
>>
>> next_record += sizeof(struct arch_pebs_xer_header);
>>
>> - ignore_mask |= XFEATURE_MASK_SSE;
>> - xmm = next_record;
>> - perf_regs->xmm_regs = xmm->xmm;
>> - next_record = xmm + 1;
>> + if (header->xmm) {
>> + ignore_mask |= XFEATURE_MASK_SSE;
>> + xmm = next_record;
>> + /*
>> + * Only output XMM regs to user space when arch-PEBS
>> + * really writes data into xstate area.
>> + */
>> + if (xer_header->xstate & XFEATURE_MASK_SSE)
>> + perf_regs->xmm_regs = xmm->xmm;
>> + next_record = xmm + 1;
>> + }
> Is it possible for this code to leak stale per-CPU FPU/extended register
> state to user space?
>
> When a PEBS event triggers in kernel mode and requests PERF_SAMPLE_REGS_USER,
> this code processes the PEBS data and sets the corresponding bits
> (e.g., XFEATURE_MASK_SSE) in ignore_mask. It then calls
> x86_pmu_setup_regs_data(), which assigns data->regs_user.regs to the
> static per-CPU x86_user_regs.
>
> If x86_user_regs is never fully cleared, its extended register pointers
> (like xmm_regs) still point to the per-CPU ext_regs_buf from prior samples.
> Because XFEATURE_MASK_SSE is excluded via ignore_mask, xsaves_nmi() and
> x86_pmu_update_xregs() skip updating x86_user_regs.xmm_regs.
>
> Consequently, the xmm_regs pointer remains pointing to the stale data in
> ext_regs_buf. When the perf core outputs the user registers, could it
> read the stale FPU state from ext_regs_buf and leak it across processes?
Yes, it's a known issue. It will be fixed in the next version.
Thanks.
>
Thread overview: 42+ messages
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
2026-03-24 0:40 ` [Patch v7 01/24] perf/x86: Move hybrid PMU initialization before x86_pmu_starting_cpu() Dapeng Mi
2026-03-24 0:40 ` [Patch v7 02/24] perf/x86/intel: Avoid PEBS event on fixed counters without extended PEBS Dapeng Mi
2026-03-24 0:40 ` [Patch v7 03/24] perf/x86/intel: Enable large PEBS sampling for XMMs Dapeng Mi
2026-03-24 0:40 ` [Patch v7 04/24] perf/x86/intel: Convert x86_perf_regs to per-cpu variables Dapeng Mi
2026-03-24 0:40 ` [Patch v7 05/24] perf: Eliminate duplicate arch-specific functions definations Dapeng Mi
2026-03-24 0:41 ` [Patch v7 06/24] perf/x86: Use x86_perf_regs in the x86 nmi handler Dapeng Mi
2026-03-24 0:41 ` [Patch v7 07/24] perf/x86: Introduce x86-specific x86_pmu_setup_regs_data() Dapeng Mi
2026-03-25 5:18 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 08/24] x86/fpu/xstate: Add xsaves_nmi() helper Dapeng Mi
2026-03-24 0:41 ` [Patch v7 09/24] x86/fpu: Ensure TIF_NEED_FPU_LOAD is set after saving FPU state Dapeng Mi
2026-03-24 0:41 ` [Patch v7 10/24] perf: Move and rename has_extended_regs() for ARCH-specific use Dapeng Mi
2026-03-24 0:41 ` [Patch v7 11/24] perf/x86: Enable XMM Register Sampling for Non-PEBS Events Dapeng Mi
2026-03-25 7:30 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 12/24] perf/x86: Enable XMM register sampling for REGS_USER case Dapeng Mi
2026-03-25 7:58 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 13/24] perf: Add sampling support for SIMD registers Dapeng Mi
2026-03-25 8:44 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 14/24] perf/x86: Enable XMM sampling using sample_simd_vec_reg_* fields Dapeng Mi
2026-03-25 9:01 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 15/24] perf/x86: Enable YMM " Dapeng Mi
2026-03-24 0:41 ` [Patch v7 16/24] perf/x86: Enable ZMM " Dapeng Mi
2026-03-24 0:41 ` [Patch v7 17/24] perf/x86: Enable OPMASK sampling using sample_simd_pred_reg_* fields Dapeng Mi
2026-03-24 0:41 ` [Patch v7 18/24] perf: Enhance perf_reg_validate() with simd_enabled argument Dapeng Mi
2026-03-24 0:41 ` [Patch v7 19/24] perf/x86: Enable eGPRs sampling using sample_regs_* fields Dapeng Mi
2026-03-24 0:41 ` [Patch v7 20/24] perf/x86: Enable SSP " Dapeng Mi
2026-03-25 9:25 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 21/24] perf/x86/intel: Enable PERF_PMU_CAP_SIMD_REGS capability Dapeng Mi
2026-04-25 2:01 ` sashiko-bot
2026-04-29 5:25 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 22/24] perf/x86/intel: Enable arch-PEBS based SIMD/eGPRs/SSP sampling Dapeng Mi
2026-04-25 3:08 ` sashiko-bot
2026-04-29 5:36 ` Mi, Dapeng [this message]
2026-03-24 0:41 ` [Patch v7 23/24] perf/x86: Activate back-to-back NMI detection for arch-PEBS induced NMIs Dapeng Mi
2026-04-25 3:31 ` sashiko-bot
2026-04-29 6:00 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 24/24] perf/x86/intel: Add sanity check for PEBS fragment size Dapeng Mi
2026-04-25 3:53 ` sashiko-bot
2026-04-29 7:04 ` Mi, Dapeng
2026-03-24 1:08 ` [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Mi, Dapeng
2026-03-25 9:41 ` Mi, Dapeng
2026-05-13 5:52 ` Mi, Dapeng