public inbox for linux-perf-users@vger.kernel.org
From: sashiko-bot@kernel.org
To: "Dapeng Mi" <dapeng1.mi@linux.intel.com>
Cc: linux-perf-users@vger.kernel.org
Subject: Re: [Patch v7 23/24] perf/x86: Activate back-to-back NMI detection for arch-PEBS induced NMIs
Date: Sat, 25 Apr 2026 03:31:58 +0000	[thread overview]
Message-ID: <20260425033158.D3A29C19425@smtp.kernel.org> (raw)
In-Reply-To: <20260324004118.3772171-24-dapeng1.mi@linux.intel.com>

Sashiko AI review found 2 potential issue(s):
- [High] Incorrect bitwise operation in intel_pmu_drain_pebs_nhm() corrupts the processed events count.
- [High] Removing the unconditional `handled` increment causes valid PMIs to be reported as unhandled when `drain_pebs()` returns 0.
--

commit 0317f48597bdef9b4d93c696bde043840ac1fc6b
Author: Dapeng Mi <dapeng1.mi@linux.intel.com>

perf/x86: Activate back-to-back NMI detection for arch-PEBS induced NMIs

This patch modifies the PEBS handlers to return the count of actually
processed events instead of void. This count is used in the PMI handler
to activate back-to-back NMI detection and avoid "suspicious NMI" warnings.

> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c

[ ... ]

> @@ -3558,9 +3558,8 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
>  	if (__test_and_clear_bit(GLOBAL_STATUS_BUFFER_OVF_BIT, (unsigned long *)&status)) {
>  		u64 pebs_enabled = cpuc->pebs_enabled;
>  
> -		handled++;
>  		x86_pmu_handle_guest_pebs(regs, &data);
> -		static_call(x86_pmu_drain_pebs)(regs, &data);
> +		handled += static_call(x86_pmu_drain_pebs)(regs, &data);

If x86_pmu_drain_pebs() returns 0, will handled fail to increment even though
the hardware asserted an NMI and set the MSR status bit?

This can happen when the host PEBS buffer is empty, for example when a guest
PEBS event triggered the PMI and was already consumed by the
x86_pmu_handle_guest_pebs() call just above.

Could this cause the core NMI framework to treat the interrupt as unhandled
and emit an unknown-NMI warning? Should handled be incremented by at least 1
whenever the status bit is explicitly acknowledged?

>  		/*
>  		 * PMI throttle may be triggered, which stops the PEBS event.

[ ... ]

> @@ -3587,8 +3586,7 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
>  	 */
>  	if (__test_and_clear_bit(GLOBAL_STATUS_ARCH_PEBS_THRESHOLD_BIT,
>  				 (unsigned long *)&status)) {
> -		handled++;
> -		static_call(x86_pmu_drain_pebs)(regs, &data);
> +		handled += static_call(x86_pmu_drain_pebs)(regs, &data);

Does this share the same issue as above, where a return value of 0 from
drain_pebs() would result in handled not being incremented for a valid PMI?

[ ... ]

> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
> --- a/arch/x86/events/intel/ds.c
> +++ b/arch/x86/events/intel/ds.c

[ ... ]

> @@ -3189,6 +3192,7 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
>  		if ((counts[bit] == 0) && (error[bit] == 0))
>  			continue;
>  
> +		events_bitmap |= bit;
>  		event = cpuc->events[bit];

Will this perform a bitwise OR with the integer index itself rather than
the expected bitmask?

Because bit is an integer index (e.g., 0 for PMC0, 3 for PMC3), if only PMC3
overflows, events_bitmap |= 3 sets bits 0 and 1 (binary 011) and hweight64()
returns 2 instead of 1. If only PMC0 overflows, events_bitmap |= 0 leaves the
bitmap empty, so that event is never counted at all.

Should this be events_bitmap |= BIT_ULL(bit) or 1ULL << bit to correctly
record the processed events?
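The difference is easy to demonstrate in a standalone sketch (mark_event_*()
are hypothetical helpers; BIT_ULL() mirrors the kernel macro and hweight64()
is modeled here with a GCC builtin):

```c
#include <stdint.h>

/* Illustrative comparison only, not kernel code. */
#define BIT_ULL(n) (1ULL << (n))

/* Population count, standing in for the kernel's hweight64(). */
static inline int hweight64(uint64_t w)
{
	return __builtin_popcountll(w);
}

/* Buggy form: ORs the counter *index* into the bitmap. */
static uint64_t mark_event_buggy(uint64_t bitmap, int bit)
{
	return bitmap | bit;
}

/* Fixed form: ORs the counter's *bitmask* into the bitmap. */
static uint64_t mark_event_fixed(uint64_t bitmap, int bit)
{
	return bitmap | BIT_ULL(bit);
}
```

For bit == 3 the buggy form yields a bitmap of 0x3 (hweight64() == 2), while
the fixed form yields 0x8 (hweight64() == 1); for bit == 0 the buggy form
leaves the bitmap unchanged.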

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260324004118.3772171-1-dapeng1.mi@linux.intel.com?part=23


Thread overview: 37+ messages
2026-03-24  0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
2026-03-24  0:40 ` [Patch v7 01/24] perf/x86: Move hybrid PMU initialization before x86_pmu_starting_cpu() Dapeng Mi
2026-03-24  0:40 ` [Patch v7 02/24] perf/x86/intel: Avoid PEBS event on fixed counters without extended PEBS Dapeng Mi
2026-03-24  0:40 ` [Patch v7 03/24] perf/x86/intel: Enable large PEBS sampling for XMMs Dapeng Mi
2026-03-24  0:40 ` [Patch v7 04/24] perf/x86/intel: Convert x86_perf_regs to per-cpu variables Dapeng Mi
2026-03-24  0:40 ` [Patch v7 05/24] perf: Eliminate duplicate arch-specific functions definations Dapeng Mi
2026-03-24  0:41 ` [Patch v7 06/24] perf/x86: Use x86_perf_regs in the x86 nmi handler Dapeng Mi
2026-03-24  0:41 ` [Patch v7 07/24] perf/x86: Introduce x86-specific x86_pmu_setup_regs_data() Dapeng Mi
2026-03-25  5:18   ` Mi, Dapeng
2026-03-24  0:41 ` [Patch v7 08/24] x86/fpu/xstate: Add xsaves_nmi() helper Dapeng Mi
2026-03-24  0:41 ` [Patch v7 09/24] x86/fpu: Ensure TIF_NEED_FPU_LOAD is set after saving FPU state Dapeng Mi
2026-03-24  0:41 ` [Patch v7 10/24] perf: Move and rename has_extended_regs() for ARCH-specific use Dapeng Mi
2026-03-24  0:41 ` [Patch v7 11/24] perf/x86: Enable XMM Register Sampling for Non-PEBS Events Dapeng Mi
2026-03-25  7:30   ` Mi, Dapeng
2026-03-24  0:41 ` [Patch v7 12/24] perf/x86: Enable XMM register sampling for REGS_USER case Dapeng Mi
2026-03-25  7:58   ` Mi, Dapeng
2026-03-24  0:41 ` [Patch v7 13/24] perf: Add sampling support for SIMD registers Dapeng Mi
2026-03-25  8:44   ` Mi, Dapeng
2026-03-24  0:41 ` [Patch v7 14/24] perf/x86: Enable XMM sampling using sample_simd_vec_reg_* fields Dapeng Mi
2026-03-25  9:01   ` Mi, Dapeng
2026-03-24  0:41 ` [Patch v7 15/24] perf/x86: Enable YMM " Dapeng Mi
2026-03-24  0:41 ` [Patch v7 16/24] perf/x86: Enable ZMM " Dapeng Mi
2026-03-24  0:41 ` [Patch v7 17/24] perf/x86: Enable OPMASK sampling using sample_simd_pred_reg_* fields Dapeng Mi
2026-03-24  0:41 ` [Patch v7 18/24] perf: Enhance perf_reg_validate() with simd_enabled argument Dapeng Mi
2026-03-24  0:41 ` [Patch v7 19/24] perf/x86: Enable eGPRs sampling using sample_regs_* fields Dapeng Mi
2026-03-24  0:41 ` [Patch v7 20/24] perf/x86: Enable SSP " Dapeng Mi
2026-03-25  9:25   ` Mi, Dapeng
2026-03-24  0:41 ` [Patch v7 21/24] perf/x86/intel: Enable PERF_PMU_CAP_SIMD_REGS capability Dapeng Mi
2026-04-25  2:01   ` sashiko-bot
2026-03-24  0:41 ` [Patch v7 22/24] perf/x86/intel: Enable arch-PEBS based SIMD/eGPRs/SSP sampling Dapeng Mi
2026-04-25  3:08   ` sashiko-bot
2026-03-24  0:41 ` [Patch v7 23/24] perf/x86: Activate back-to-back NMI detection for arch-PEBS induced NMIs Dapeng Mi
2026-04-25  3:31   ` sashiko-bot [this message]
2026-03-24  0:41 ` [Patch v7 24/24] perf/x86/intel: Add sanity check for PEBS fragment size Dapeng Mi
2026-04-25  3:53   ` sashiko-bot
2026-03-24  1:08 ` [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Mi, Dapeng
2026-03-25  9:41 ` Mi, Dapeng
