From: Dapeng Mi <dapeng1.mi@linux.intel.com>
To: Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
Namhyung Kim <namhyung@kernel.org>,
Thomas Gleixner <tglx@linutronix.de>,
Dave Hansen <dave.hansen@linux.intel.com>,
Ian Rogers <irogers@google.com>,
Adrian Hunter <adrian.hunter@intel.com>,
Jiri Olsa <jolsa@kernel.org>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Andi Kleen <ak@linux.intel.com>,
Eranian Stephane <eranian@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>,
broonie@kernel.org, Ravi Bangoria <ravi.bangoria@amd.com>,
linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
Zide Chen <zide.chen@intel.com>,
Falcon Thomas <thomas.falcon@intel.com>,
Dapeng Mi <dapeng1.mi@intel.com>,
Xudong Hao <xudong.hao@intel.com>,
Dapeng Mi <dapeng1.mi@linux.intel.com>
Subject: [Patch v7 24/24] perf/x86/intel: Add sanity check for PEBS fragment size
Date: Tue, 24 Mar 2026 08:41:18 +0800 [thread overview]
Message-ID: <20260324004118.3772171-25-dapeng1.mi@linux.intel.com> (raw)
In-Reply-To: <20260324004118.3772171-1-dapeng1.mi@linux.intel.com>
Add a sanity check for corrupted PEBS fragment sizes, which could in
theory cause an infinite loop. If a corrupted PEBS fragment is
detected, the entire PEBS record containing that fragment, as well as
all subsequent records, is discarded. This preserves the integrity of
the PEBS data and keeps setup_arch_pebs_sample_data() from looping
forever.
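For illustration only, the guard this patch adds can be sketched in
user space roughly as follows. The struct frag_header layout and the
walk_fragments() helper are hypothetical simplifications for the
example, not the kernel's actual arch-PEBS structures; the real code
operates on the hardware-defined record headers in the PEBS buffer.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified fragment header for illustration. */
struct frag_header {
	uint16_t size;	/* size of this fragment in bytes */
	bool cont;	/* a further fragment follows */
};

/*
 * Walk a chain of fragments inside [base, top). Return true if the
 * chain is well formed; return false if a corrupted size would either
 * loop forever (zero-sized fragment) or walk past the end of the
 * buffer, mirroring the !header->size and at >= top checks.
 */
static bool walk_fragments(const uint8_t *base, const uint8_t *top)
{
	const uint8_t *at = base;

	while (at < top) {
		const struct frag_header *hdr =
			(const struct frag_header *)at;

		if (!hdr->size)		/* zero size => infinite loop */
			return false;
		at += hdr->size;
		if (at > top)		/* size walked past the buffer */
			return false;
		if (!hdr->cont)		/* last fragment of the record */
			return true;
	}
	return false;	/* continuation expected but no data left */
}
```

A well-formed chain terminates on a fragment with the continuation
flag clear; any zero or oversized fragment size makes the walk bail
out instead of spinning.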
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
V7: new patch.
arch/x86/events/intel/ds.c | 33 +++++++++++++++++++++++----------
1 file changed, 23 insertions(+), 10 deletions(-)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 6e1c516122c0..4b0dd8379737 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -2819,7 +2819,7 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
}
/* Parse followed fragments if there are. */
- if (arch_pebs_record_continued(header)) {
+ if (arch_pebs_record_continued(header) && header->size) {
at = at + header->size;
goto again;
}
@@ -2948,13 +2948,17 @@ __intel_pmu_pebs_last_event(struct perf_event *event,
struct pt_regs *iregs,
struct pt_regs *regs,
struct perf_sample_data *data,
- void *at,
- int count,
+ void *at, int count, bool corrupted,
setup_fn setup_sample)
{
struct hw_perf_event *hwc = &event->hw;
- setup_sample(event, iregs, at, data, regs);
+ /* Skip parsing corrupted PEBS record. */
+ if (corrupted)
+ perf_sample_data_init(data, 0, event->hw.last_period);
+ else
+ setup_sample(event, iregs, at, data, regs);
+
if (iregs == &dummy_iregs) {
/*
* The PEBS records may be drained in the non-overflow context,
@@ -3026,13 +3030,15 @@ __intel_pmu_pebs_events(struct perf_event *event,
iregs = &dummy_iregs;
while (cnt > 1) {
- __intel_pmu_pebs_event(event, iregs, regs, data, at, setup_sample);
+ __intel_pmu_pebs_event(event, iregs, regs, data,
+ at, setup_sample);
at += cpuc->pebs_record_size;
at = get_next_pebs_record_by_bit(at, top, bit);
cnt--;
}
- __intel_pmu_pebs_last_event(event, iregs, regs, data, at, count, setup_sample);
+ __intel_pmu_pebs_last_event(event, iregs, regs, data, at,
+ count, false, setup_sample);
}
static int intel_pmu_drain_pebs_core(struct pt_regs *iregs, struct perf_sample_data *data)
@@ -3247,7 +3253,8 @@ static __always_inline void
__intel_pmu_handle_last_pebs_record(struct pt_regs *iregs,
struct pt_regs *regs,
struct perf_sample_data *data,
- u64 mask, short *counts, void **last,
+ u64 mask, short *counts,
+ void **last, bool corrupted,
setup_fn setup_sample)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
@@ -3261,7 +3268,7 @@ __intel_pmu_handle_last_pebs_record(struct pt_regs *iregs,
event = cpuc->events[bit];
__intel_pmu_pebs_last_event(event, iregs, regs, data, last[bit],
- counts[bit], setup_sample);
+ counts[bit], corrupted, setup_sample);
}
}
@@ -3317,7 +3324,7 @@ static int intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_da
}
__intel_pmu_handle_last_pebs_record(iregs, regs, data, mask, counts, last,
- setup_pebs_adaptive_sample_data);
+ false, setup_pebs_adaptive_sample_data);
return hweight64(events_bitmap);
}
@@ -3333,6 +3340,7 @@ static int intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
struct pt_regs *regs = &perf_regs->regs;
void *base, *at, *top;
u64 events_bitmap = 0;
+ bool corrupted = false;
u64 mask;
rdmsrq(MSR_IA32_PEBS_INDEX, index.whole);
@@ -3388,6 +3396,10 @@ static int intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
if (!header->size)
break;
at += header->size;
+ if (WARN_ON_ONCE(at >= top)) {
+ corrupted = true;
+ goto done;
+ }
header = at;
}
@@ -3395,8 +3407,9 @@ static int intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
at += header->size;
}
+done:
__intel_pmu_handle_last_pebs_record(iregs, regs, data, mask,
- counts, last,
+ counts, last, corrupted,
setup_arch_pebs_sample_data);
return hweight64(events_bitmap);
--
2.34.1
Thread overview: 33+ messages
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
2026-03-24 0:40 ` [Patch v7 01/24] perf/x86: Move hybrid PMU initialization before x86_pmu_starting_cpu() Dapeng Mi
2026-03-24 0:40 ` [Patch v7 02/24] perf/x86/intel: Avoid PEBS event on fixed counters without extended PEBS Dapeng Mi
2026-03-24 0:40 ` [Patch v7 03/24] perf/x86/intel: Enable large PEBS sampling for XMMs Dapeng Mi
2026-03-24 0:40 ` [Patch v7 04/24] perf/x86/intel: Convert x86_perf_regs to per-cpu variables Dapeng Mi
2026-03-24 0:40 ` [Patch v7 05/24] perf: Eliminate duplicate arch-specific functions definations Dapeng Mi
2026-03-24 0:41 ` [Patch v7 06/24] perf/x86: Use x86_perf_regs in the x86 nmi handler Dapeng Mi
2026-03-24 0:41 ` [Patch v7 07/24] perf/x86: Introduce x86-specific x86_pmu_setup_regs_data() Dapeng Mi
2026-03-25 5:18 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 08/24] x86/fpu/xstate: Add xsaves_nmi() helper Dapeng Mi
2026-03-24 0:41 ` [Patch v7 09/24] x86/fpu: Ensure TIF_NEED_FPU_LOAD is set after saving FPU state Dapeng Mi
2026-03-24 0:41 ` [Patch v7 10/24] perf: Move and rename has_extended_regs() for ARCH-specific use Dapeng Mi
2026-03-24 0:41 ` [Patch v7 11/24] perf/x86: Enable XMM Register Sampling for Non-PEBS Events Dapeng Mi
2026-03-25 7:30 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 12/24] perf/x86: Enable XMM register sampling for REGS_USER case Dapeng Mi
2026-03-25 7:58 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 13/24] perf: Add sampling support for SIMD registers Dapeng Mi
2026-03-25 8:44 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 14/24] perf/x86: Enable XMM sampling using sample_simd_vec_reg_* fields Dapeng Mi
2026-03-25 9:01 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 15/24] perf/x86: Enable YMM " Dapeng Mi
2026-03-24 0:41 ` [Patch v7 16/24] perf/x86: Enable ZMM " Dapeng Mi
2026-03-24 0:41 ` [Patch v7 17/24] perf/x86: Enable OPMASK sampling using sample_simd_pred_reg_* fields Dapeng Mi
2026-03-24 0:41 ` [Patch v7 18/24] perf: Enhance perf_reg_validate() with simd_enabled argument Dapeng Mi
2026-03-24 0:41 ` [Patch v7 19/24] perf/x86: Enable eGPRs sampling using sample_regs_* fields Dapeng Mi
2026-03-24 0:41 ` [Patch v7 20/24] perf/x86: Enable SSP " Dapeng Mi
2026-03-25 9:25 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 21/24] perf/x86/intel: Enable PERF_PMU_CAP_SIMD_REGS capability Dapeng Mi
2026-03-24 0:41 ` [Patch v7 22/24] perf/x86/intel: Enable arch-PEBS based SIMD/eGPRs/SSP sampling Dapeng Mi
2026-03-24 0:41 ` [Patch v7 23/24] perf/x86: Activate back-to-back NMI detection for arch-PEBS induced NMIs Dapeng Mi
2026-03-24 0:41 ` Dapeng Mi [this message]
2026-03-24 1:08 ` [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Mi, Dapeng
2026-03-25 9:41 ` Mi, Dapeng