* [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf
@ 2026-03-24 0:40 Dapeng Mi
From: Dapeng Mi @ 2026-03-24 0:40 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Dapeng Mi
Changes since V6:
- Fix potential overwrite issue in the hybrid PMU structure (patch 01/24)
- Restrict PEBS events to work only on GP counters if no PEBS baseline is
suggested (patch 02/24)
- Use per-cpu x86_intr_regs for perf_event_nmi_handler() instead of
temporary variable (patch 06/24)
- Add helper update_fpu_state_and_flag() to ensure TIF_NEED_FPU_LOAD is
set after save_fpregs_to_fpstate() call (patch 09/24)
- Optimize and simplify x86_pmu_sample_xregs(), etc. (patch 11/24)
- Add macro word_for_each_set_bit() to simplify u64 set-bit iteration
(patch 13/24)
- Add sanity check for PEBS fragment size (patch 24/24)
Changes since V5:
- Introduce 3 commits to fix newly found PEBS issues (Patch 01~03/19)
- Address Peter's comments, including:
* Fully support user-regs sampling of the SIMD/eGPRs/SSP registers
* Adjust newly added fields in perf_event_attr to avoid holes
* Fix the endian issue introduced by for_each_set_bit() in
event/core.c
* Remove some unnecessary macros from UAPI header perf_regs.h
* Enhance b2b NMI detection for all PEBS handlers to ensure identical
behaviors of all PEBS handlers
- Split out the perf-tools patches, which will be posted in a separate
patchset later
Changes since V4:
- Rewrite some function comments and commit messages (Dave)
- Add arch-PEBS based SIMD/eGPRs/SSP sampling support (Patch 15/19)
- Fix "suspicious NMI" warning observed on PTL/NVL P-core and DMR by
activating the back-to-back NMI detection mechanism (Patch 16/19)
- Fix some minor issues on perf-tool patches (Patch 18/19)
Changes since V3:
- Drop the SIMD registers if an NMI hits kernel mode for REGS_USER.
- Only dump the available regs, rather than zero and dump the
unavailable regs. It's possible that the dumped registers are a subset
of the requested registers.
- Some minor updates to address Dapeng's comments in V3.
Changes since V2:
- Use the FPU format for the x86_pmu.ext_regs_mask as well
- Add a check before invoking xsaves_nmi()
- Add perf_simd_reg_check() to retrieve the number of available
registers. If the kernel fails to get the requested registers, e.g.,
XSAVES fails, nothing is dumped to userspace (V2 dumped all 0s).
- Add POC perf tool patches
Changes since V1:
- Apply the new interfaces to configure and dump the SIMD registers
- Utilize the existing FPU functions, e.g., xstate_calculate_size,
get_xsave_addr().
Starting from Intel Ice Lake, XMM registers can be collected in a PEBS
record. Future Architecture PEBS will include additional registers such
as YMM, ZMM, OPMASK, SSP and APX eGPRs, contingent on hardware support.
This patch set introduces a software solution that mitigates the hardware
requirement by utilizing the XSAVES instruction to retrieve the requested
registers in the overflow handler. This feature is no longer limited to
PEBS events or specific platforms. While the hardware solution remains
preferable due to its lower overhead and higher accuracy, this software
approach provides a viable alternative.
The solution is theoretically compatible with all x86 platforms but is
currently enabled on newer platforms, including Sapphire Rapids and
later P-core server platforms, Sierra Forest and later E-core server
platforms and recent Client platforms, like Arrow Lake, Panther Lake and
Nova Lake.
Newly supported registers include YMM, ZMM, OPMASK, SSP, and APX eGPRs.
Due to space constraints in sample_regs_user/intr, new fields have been
introduced in the perf_event_attr structure to accommodate these
registers.
After a long discussion in V1
(https://lore.kernel.org/lkml/3f1c9a9e-cb63-47ff-a5e9-06555fa6cc9a@linux.intel.com/),
the following new fields are introduced:
@@ -547,6 +549,25 @@ struct perf_event_attr {
__u64 config3; /* extension of config2 */
__u64 config4; /* extension of config3 */
+
+ /*
+ * Defines set of SIMD registers to dump on samples.
+ * The sample_simd_regs_enabled !=0 implies the
+ * set of SIMD registers is used to config all SIMD registers.
+ * If !sample_simd_regs_enabled, sample_regs_XXX may be used to
+ * config some SIMD registers on X86.
+ */
+ union {
+ __u16 sample_simd_regs_enabled;
+ __u16 sample_simd_pred_reg_qwords;
+ };
+ __u16 sample_simd_vec_reg_qwords;
+ __u32 __reserved_4;
+
+ __u32 sample_simd_pred_reg_intr;
+ __u32 sample_simd_pred_reg_user;
+ __u64 sample_simd_vec_reg_intr;
+ __u64 sample_simd_vec_reg_user;
};
/*
@@ -1020,7 +1041,15 @@ enum perf_event_type {
* } && PERF_SAMPLE_BRANCH_STACK
*
* { u64 abi; # enum perf_sample_regs_abi
- * u64 regs[weight(mask)]; } && PERF_SAMPLE_REGS_USER
+ * u64 regs[weight(mask)];
+ * struct {
+ * u16 nr_vectors; # 0 ... weight(sample_simd_vec_reg_user)
+ * u16 vector_qwords; # 0 ... sample_simd_vec_reg_qwords
+ * u16 nr_pred; # 0 ... weight(sample_simd_pred_reg_user)
+ * u16 pred_qwords; # 0 ... sample_simd_pred_reg_qwords
+ * u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords];
+ * } && (abi & PERF_SAMPLE_REGS_ABI_SIMD)
+ * } && PERF_SAMPLE_REGS_USER
*
* { u64 size;
* char data[size];
@@ -1047,7 +1076,15 @@ enum perf_event_type {
* { u64 data_src; } && PERF_SAMPLE_DATA_SRC
* { u64 transaction; } && PERF_SAMPLE_TRANSACTION
* { u64 abi; # enum perf_sample_regs_abi
- * u64 regs[weight(mask)]; } && PERF_SAMPLE_REGS_INTR
+ * u64 regs[weight(mask)];
+ * struct {
+ * u16 nr_vectors; # 0 ... weight(sample_simd_vec_reg_intr)
+ * u16 vector_qwords; # 0 ... sample_simd_vec_reg_qwords
+ * u16 nr_pred; # 0 ... weight(sample_simd_pred_reg_intr)
+ * u16 pred_qwords; # 0 ... sample_simd_pred_reg_qwords
+ * u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords];
+ * } && (abi & PERF_SAMPLE_REGS_ABI_SIMD)
+ * } && PERF_SAMPLE_REGS_INTR
* { u64 phys_addr;} && PERF_SAMPLE_PHYS_ADDR
* { u64 cgroup;} && PERF_SAMPLE_CGROUP
* { u64 data_page_size;} && PERF_SAMPLE_DATA_PAGE_SIZE
To maintain simplicity, single width fields, sample_simd_{vec|pred}_reg_qwords,
are introduced to indicate the register width. For example:
- sample_simd_vec_reg_qwords = 2 for XMM registers (128 bits) on x86
- sample_simd_vec_reg_qwords = 4 for YMM registers (256 bits) on x86
Four additional fields, sample_simd_{vec|pred}_reg_{intr|user}, represent
the bitmaps of sampled registers. For instance, the bitmap for the 16 x86
XMM registers is 0xffff. Although users can theoretically sample a subset
of registers, the current perf-tool implementation only supports sampling
all registers of each type, to avoid complexity.
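For illustration only (this helper is not part of the series; it simply
applies the bitmap and qwords fields described above), the amount of SIMD
sample data implied by a given configuration can be computed as:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical helper, not from the series: given the register bitmaps
 * (sample_simd_{vec|pred}_reg_{intr|user}) and the per-register widths
 * (sample_simd_{vec|pred}_reg_qwords), compute how many u64 qwords of
 * SIMD data follow the general registers in one sample record.
 */
static uint64_t simd_payload_qwords(uint64_t vec_mask, uint16_t vec_qwords,
				    uint32_t pred_mask, uint16_t pred_qwords)
{
	uint16_t nr_vectors = (uint16_t)__builtin_popcountll(vec_mask);
	uint16_t nr_pred    = (uint16_t)__builtin_popcount(pred_mask);

	return (uint64_t)nr_vectors * vec_qwords +
	       (uint64_t)nr_pred * pred_qwords;
}
```

For example, all 16 XMM registers (mask 0xffff, 2 qwords each) give 32
qwords of data; all 32 ZMM registers (8 qwords each) plus 8 OPMASK
registers (1 qword each) give 264 qwords, matching the nr_vectors 32 /
vector_qwords 8 / nr_pred 8 / pred_qwords 1 values shown in the perf
report dump later in this cover letter.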
A new ABI, PERF_SAMPLE_REGS_ABI_SIMD, is introduced to signal user space
tools about the presence of SIMD registers in sampling records. When this
flag is detected, tools should recognize that extra SIMD register data
follows the general register data. The extra SIMD register data is laid
out as follows:
u16 nr_vectors;
u16 vector_qwords;
u16 nr_pred;
u16 pred_qwords;
u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords];
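A consumer that sees PERF_SAMPLE_REGS_ABI_SIMD set in abi could size and
skip this trailer roughly as follows (a minimal sketch; the struct name
is made up for illustration, only the field layout comes from the record
format above):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical view of the SIMD trailer appended after the general
 * registers; the layout mirrors the record format documented above,
 * the type name itself is invented here.
 */
struct simd_regs_hdr {
	uint16_t nr_vectors;
	uint16_t vector_qwords;
	uint16_t nr_pred;
	uint16_t pred_qwords;
	/* followed by u64 data[nr_vectors * vector_qwords +
	 *                      nr_pred * pred_qwords] */
};

/* Total size in bytes of the trailer, so a parser can step over it. */
static size_t simd_trailer_size(const struct simd_regs_hdr *hdr)
{
	size_t qwords = (size_t)hdr->nr_vectors * hdr->vector_qwords +
			(size_t)hdr->nr_pred * hdr->pred_qwords;

	return sizeof(*hdr) + qwords * sizeof(uint64_t);
}
```

With the values from the example record below (32 ZMM vectors of 8 qwords
plus 8 OPMASK registers of 1 qword), the trailer occupies 8 header bytes
plus 264 qwords of data.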
With this patch set, sampling for the aforementioned registers is
supported on the Intel Nova Lake platform.
Examples:
$perf record -I?
available registers: AX BX CX DX SI DI BP SP IP FLAGS CS SS R8 R9 R10
R11 R12 R13 R14 R15 R16 R17 R18 R19 R20 R21 R22 R23 R24 R25 R26 R27 R28
R29 R30 R31 SSP XMM0-15 YMM0-15 ZMM0-31 OPMASK0-7
$perf record --user-regs=?
available registers: AX BX CX DX SI DI BP SP IP FLAGS CS SS R8 R9 R10
R11 R12 R13 R14 R15 R16 R17 R18 R19 R20 R21 R22 R23 R24 R25 R26 R27 R28
R29 R30 R31 SSP XMM0-15 YMM0-15 ZMM0-31 OPMASK0-7
$perf record -e branches:p -Iax,bx,r8,r16,r31,ssp,xmm,ymm,zmm,opmask -c 100000 ./test
$perf report -D
... ...
14027761992115 0xcf30 [0x8a8]: PERF_RECORD_SAMPLE(IP, 0x1): 29964/29964:
0xffffffff9f085e24 period: 100000 addr: 0
... intr regs: mask 0x18001010003 ABI 64-bit
.... AX 0xdffffc0000000000
.... BX 0xffff8882297685e8
.... R8 0x0000000000000000
.... R16 0x0000000000000000
.... R31 0x0000000000000000
.... SSP 0x0000000000000000
... SIMD ABI nr_vectors 32 vector_qwords 8 nr_pred 8 pred_qwords 1
.... ZMM [0] 0xffffffffffffffff
.... ZMM [0] 0x0000000000000001
.... ZMM [0] 0x0000000000000000
.... ZMM [0] 0x0000000000000000
.... ZMM [0] 0x0000000000000000
.... ZMM [0] 0x0000000000000000
.... ZMM [0] 0x0000000000000000
.... ZMM [0] 0x0000000000000000
.... ZMM [1] 0x003a6b6165506d56
... ...
.... ZMM [31] 0x0000000000000000
.... ZMM [31] 0x0000000000000000
.... ZMM [31] 0x0000000000000000
.... ZMM [31] 0x0000000000000000
.... ZMM [31] 0x0000000000000000
.... ZMM [31] 0x0000000000000000
.... ZMM [31] 0x0000000000000000
.... ZMM [31] 0x0000000000000000
.... OPMASK[0] 0x00000000fffffe00
.... OPMASK[1] 0x0000000000ffffff
.... OPMASK[2] 0x000000000000007f
.... OPMASK[3] 0x0000000000000000
.... OPMASK[4] 0x0000000000010080
.... OPMASK[5] 0x0000000000000000
.... OPMASK[6] 0x0000400004000000
.... OPMASK[7] 0x0000000000000000
... ...
History:
v6: https://lore.kernel.org/all/20260209072047.2180332-1-dapeng1.mi@linux.intel.com/
v5: https://lore.kernel.org/all/20251203065500.2597594-1-dapeng1.mi@linux.intel.com/
v4: https://lore.kernel.org/all/20250925061213.178796-1-dapeng1.mi@linux.intel.com/
v3: https://lore.kernel.org/lkml/20250815213435.1702022-1-kan.liang@linux.intel.com/
v2: https://lore.kernel.org/lkml/20250626195610.405379-1-kan.liang@linux.intel.com/
v1: https://lore.kernel.org/lkml/20250613134943.3186517-1-kan.liang@linux.intel.com/
Dapeng Mi (12):
perf/x86: Move hybrid PMU initialization before x86_pmu_starting_cpu()
perf/x86/intel: Avoid PEBS event on fixed counters without extended
PEBS
perf/x86/intel: Enable large PEBS sampling for XMMs
perf/x86/intel: Convert x86_perf_regs to per-cpu variables
perf: Eliminate duplicate arch-specific function definitions
x86/fpu: Ensure TIF_NEED_FPU_LOAD is set after saving FPU state
perf/x86: Enable XMM Register Sampling for Non-PEBS Events
perf/x86: Enable XMM register sampling for REGS_USER case
perf: Enhance perf_reg_validate() with simd_enabled argument
perf/x86/intel: Enable arch-PEBS based SIMD/eGPRs/SSP sampling
perf/x86: Activate back-to-back NMI detection for arch-PEBS induced
NMIs
perf/x86/intel: Add sanity check for PEBS fragment size
Kan Liang (12):
perf/x86: Use x86_perf_regs in the x86 nmi handler
perf/x86: Introduce x86-specific x86_pmu_setup_regs_data()
x86/fpu/xstate: Add xsaves_nmi() helper
perf: Move and rename has_extended_regs() for ARCH-specific use
perf: Add sampling support for SIMD registers
perf/x86: Enable XMM sampling using sample_simd_vec_reg_* fields
perf/x86: Enable YMM sampling using sample_simd_vec_reg_* fields
perf/x86: Enable ZMM sampling using sample_simd_vec_reg_* fields
perf/x86: Enable OPMASK sampling using sample_simd_pred_reg_* fields
perf/x86: Enable eGPRs sampling using sample_regs_* fields
perf/x86: Enable SSP sampling using sample_regs_* fields
perf/x86/intel: Enable PERF_PMU_CAP_SIMD_REGS capability
arch/arm/kernel/perf_regs.c | 8 +-
arch/arm64/kernel/perf_regs.c | 8 +-
arch/csky/kernel/perf_regs.c | 8 +-
arch/loongarch/kernel/perf_regs.c | 8 +-
arch/mips/kernel/perf_regs.c | 8 +-
arch/parisc/kernel/perf_regs.c | 8 +-
arch/powerpc/perf/perf_regs.c | 2 +-
arch/riscv/kernel/perf_regs.c | 8 +-
arch/s390/kernel/perf_regs.c | 2 +-
arch/x86/events/core.c | 392 +++++++++++++++++++++++++-
arch/x86/events/intel/core.c | 127 ++++++++-
arch/x86/events/intel/ds.c | 195 ++++++++++---
arch/x86/events/perf_event.h | 85 +++++-
arch/x86/include/asm/fpu/sched.h | 5 +-
arch/x86/include/asm/fpu/xstate.h | 3 +
arch/x86/include/asm/msr-index.h | 7 +
arch/x86/include/asm/perf_event.h | 38 ++-
arch/x86/include/uapi/asm/perf_regs.h | 51 ++++
arch/x86/kernel/fpu/core.c | 27 +-
arch/x86/kernel/fpu/xstate.c | 25 +-
arch/x86/kernel/perf_regs.c | 134 +++++++--
include/linux/perf_event.h | 16 ++
include/linux/perf_regs.h | 36 +--
include/uapi/linux/perf_event.h | 50 +++-
kernel/events/core.c | 138 +++++++--
tools/perf/util/header.c | 3 +-
26 files changed, 1193 insertions(+), 199 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [Patch v7 01/24] perf/x86: Move hybrid PMU initialization before x86_pmu_starting_cpu()
From: Dapeng Mi @ 2026-03-24 0:40 UTC (permalink / raw)
The current approach initializes hybrid PMU structures immediately before
registering them. This is risky as it can lead to key fields, such as
'capabilities', being inadvertently overwritten.
Although no issues have arisen so far, this method is not ideal. It makes
the PMU structure fields susceptible to being overwritten, especially with
future changes that might initialize fields like 'capabilities' within
init_hybrid_pmu() called by x86_pmu_starting_cpu().
To mitigate this potential problem, move the default hybrid structure
initialization before calling x86_pmu_starting_cpu().
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
V7: new patch.
arch/x86/events/core.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 03ce1bc7ef2e..67883cf1d675 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2189,8 +2189,20 @@ static int __init init_hw_perf_events(void)
pmu.attr_update = x86_pmu.attr_update;
- if (!is_hybrid())
+ if (!is_hybrid()) {
x86_pmu_show_pmu_cap(NULL);
+ } else {
+ int i;
+
+ /*
+ * Init default ops.
+ * Must be called before registering x86_pmu_starting_cpu(),
+ * otherwise some key PMU fields, e.g., capabilities
+ * initialized in x86_pmu_starting_cpu(), would be overwritten.
+ */
+ for (i = 0; i < x86_pmu.num_hybrid_pmus; i++)
+ x86_pmu.hybrid_pmu[i].pmu = pmu;
+ }
if (!x86_pmu.read)
x86_pmu.read = _x86_pmu_read;
@@ -2237,7 +2249,6 @@ static int __init init_hw_perf_events(void)
for (i = 0; i < x86_pmu.num_hybrid_pmus; i++) {
hybrid_pmu = &x86_pmu.hybrid_pmu[i];
- hybrid_pmu->pmu = pmu;
hybrid_pmu->pmu.type = -1;
hybrid_pmu->pmu.attr_update = x86_pmu.attr_update;
hybrid_pmu->pmu.capabilities |= PERF_PMU_CAP_EXTENDED_HW_TYPE;
--
2.34.1
* [Patch v7 02/24] perf/x86/intel: Avoid PEBS event on fixed counters without extended PEBS
From: Dapeng Mi @ 2026-03-24 0:40 UTC (permalink / raw)
Before the introduction of extended PEBS, PEBS supported only
general-purpose (GP) counters. In a virtual machine (VM) environment,
the PEBS_BASELINE bit in PERF_CAPABILITIES may not be set, but the PEBS
format could be indicated as 4 or higher. In such cases, PEBS events
might be scheduled to fixed counters, and writing the corresponding bits
into the PEBS_ENABLE MSR could cause a #GP fault.
To fix this issue, enhance intel_pebs_constraints() to avoid scheduling
PEBS events on fixed counters if extended PEBS is not supported.
Reported-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
V2: Restrict PEBS events to work only on GP counters if no PEBS baseline
is suggested, instead of limiting cpuc->pebs_enabled to PEBS-capable
counters as in v1.
arch/x86/events/intel/ds.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 5027afc97b65..49af127bff68 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1557,6 +1557,14 @@ struct event_constraint *intel_pebs_constraints(struct perf_event *event)
if (pebs_constraints) {
for_each_event_constraint(c, pebs_constraints) {
if (constraint_match(c, event->hw.config)) {
+ /*
+ * If fixed counters are suggested in the constraints,
+ * but extended PEBS is not supported, empty constraint
+ * should be returned.
+ */
+ if ((c->idxmsk64 & ~PEBS_COUNTER_MASK) &&
+ !(x86_pmu.flags & PMU_FL_PEBS_ALL))
+ break;
event->hw.flags |= c->flags;
return c;
}
--
2.34.1
* [Patch v7 03/24] perf/x86/intel: Enable large PEBS sampling for XMMs
From: Dapeng Mi @ 2026-03-24 0:40 UTC (permalink / raw)
Modern PEBS hardware supports directly sampling XMM registers, so large
PEBS can be enabled for XMM registers just like the other GPRs.
Reported-by: Xudong Hao <xudong.hao@intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 4768236c054b..5a2b1503b6a5 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -4423,7 +4423,8 @@ static unsigned long intel_pmu_large_pebs_flags(struct perf_event *event)
flags &= ~PERF_SAMPLE_REGS_USER;
if (event->attr.sample_regs_user & ~PEBS_GP_REGS)
flags &= ~PERF_SAMPLE_REGS_USER;
- if (event->attr.sample_regs_intr & ~PEBS_GP_REGS)
+ if (event->attr.sample_regs_intr &
+ ~(PEBS_GP_REGS | PERF_REG_EXTENDED_MASK))
flags &= ~PERF_SAMPLE_REGS_INTR;
return flags;
}
--
2.34.1
* [Patch v7 04/24] perf/x86/intel: Convert x86_perf_regs to per-cpu variables
From: Dapeng Mi @ 2026-03-24 0:40 UTC (permalink / raw)
Currently, the intel_pmu_drain_pebs_icl() and intel_pmu_drain_arch_pebs()
helpers define many temporary variables. Upcoming patches will add new
fields like *ymm_regs and *zmm_regs to the x86_perf_regs structure to
support sampling for these SIMD registers. This would increase the stack
size consumed by these helpers, potentially triggering the warning:
"the frame size of 1048 bytes is larger than 1024 bytes
[-Wframe-larger-than=]".
To eliminate this warning, convert x86_perf_regs to per-cpu variables.
No functional changes are intended.
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/ds.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 49af127bff68..52eb6eac5df3 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -3179,14 +3179,16 @@ __intel_pmu_handle_last_pebs_record(struct pt_regs *iregs,
}
+static DEFINE_PER_CPU(struct x86_perf_regs, x86_pebs_regs);
+
static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_data *data)
{
short counts[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS] = {};
void *last[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS];
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
struct debug_store *ds = cpuc->ds;
- struct x86_perf_regs perf_regs;
- struct pt_regs *regs = &perf_regs.regs;
+ struct x86_perf_regs *perf_regs = this_cpu_ptr(&x86_pebs_regs);
+ struct pt_regs *regs = &perf_regs->regs;
struct pebs_basic *basic;
void *base, *at, *top;
u64 mask;
@@ -3236,8 +3238,8 @@ static void intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
void *last[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS];
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
union arch_pebs_index index;
- struct x86_perf_regs perf_regs;
- struct pt_regs *regs = &perf_regs.regs;
+ struct x86_perf_regs *perf_regs = this_cpu_ptr(&x86_pebs_regs);
+ struct pt_regs *regs = &perf_regs->regs;
void *base, *at, *top;
u64 mask;
--
2.34.1
* [Patch v7 05/24] perf: Eliminate duplicate arch-specific function definitions
From: Dapeng Mi @ 2026-03-24 0:40 UTC (permalink / raw)
Define default common __weak implementations of perf_reg_value(),
perf_reg_validate(), perf_reg_abi() and perf_get_regs_user(). This
eliminates the duplicated arch-specific definitions.
No functional changes intended.
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/arm/kernel/perf_regs.c | 6 ------
arch/arm64/kernel/perf_regs.c | 6 ------
arch/csky/kernel/perf_regs.c | 6 ------
arch/loongarch/kernel/perf_regs.c | 6 ------
arch/mips/kernel/perf_regs.c | 6 ------
arch/parisc/kernel/perf_regs.c | 6 ------
arch/riscv/kernel/perf_regs.c | 6 ------
arch/x86/kernel/perf_regs.c | 6 ------
include/linux/perf_regs.h | 32 ++++++-------------------------
kernel/events/core.c | 22 +++++++++++++++++++++
10 files changed, 28 insertions(+), 74 deletions(-)
diff --git a/arch/arm/kernel/perf_regs.c b/arch/arm/kernel/perf_regs.c
index 0529f90395c9..d575a4c3ca56 100644
--- a/arch/arm/kernel/perf_regs.c
+++ b/arch/arm/kernel/perf_regs.c
@@ -31,9 +31,3 @@ u64 perf_reg_abi(struct task_struct *task)
return PERF_SAMPLE_REGS_ABI_32;
}
-void perf_get_regs_user(struct perf_regs *regs_user,
- struct pt_regs *regs)
-{
- regs_user->regs = task_pt_regs(current);
- regs_user->abi = perf_reg_abi(current);
-}
diff --git a/arch/arm64/kernel/perf_regs.c b/arch/arm64/kernel/perf_regs.c
index b4eece3eb17d..70e2f13f587f 100644
--- a/arch/arm64/kernel/perf_regs.c
+++ b/arch/arm64/kernel/perf_regs.c
@@ -98,9 +98,3 @@ u64 perf_reg_abi(struct task_struct *task)
return PERF_SAMPLE_REGS_ABI_64;
}
-void perf_get_regs_user(struct perf_regs *regs_user,
- struct pt_regs *regs)
-{
- regs_user->regs = task_pt_regs(current);
- regs_user->abi = perf_reg_abi(current);
-}
diff --git a/arch/csky/kernel/perf_regs.c b/arch/csky/kernel/perf_regs.c
index 09b7f88a2d6a..94601f37b596 100644
--- a/arch/csky/kernel/perf_regs.c
+++ b/arch/csky/kernel/perf_regs.c
@@ -31,9 +31,3 @@ u64 perf_reg_abi(struct task_struct *task)
return PERF_SAMPLE_REGS_ABI_32;
}
-void perf_get_regs_user(struct perf_regs *regs_user,
- struct pt_regs *regs)
-{
- regs_user->regs = task_pt_regs(current);
- regs_user->abi = perf_reg_abi(current);
-}
diff --git a/arch/loongarch/kernel/perf_regs.c b/arch/loongarch/kernel/perf_regs.c
index 263ac4ab5af6..8dd604f01745 100644
--- a/arch/loongarch/kernel/perf_regs.c
+++ b/arch/loongarch/kernel/perf_regs.c
@@ -45,9 +45,3 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
return regs->regs[idx];
}
-void perf_get_regs_user(struct perf_regs *regs_user,
- struct pt_regs *regs)
-{
- regs_user->regs = task_pt_regs(current);
- regs_user->abi = perf_reg_abi(current);
-}
diff --git a/arch/mips/kernel/perf_regs.c b/arch/mips/kernel/perf_regs.c
index e686780d1647..7736d3c5ebd2 100644
--- a/arch/mips/kernel/perf_regs.c
+++ b/arch/mips/kernel/perf_regs.c
@@ -60,9 +60,3 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
return (s64)v; /* Sign extend if 32-bit. */
}
-void perf_get_regs_user(struct perf_regs *regs_user,
- struct pt_regs *regs)
-{
- regs_user->regs = task_pt_regs(current);
- regs_user->abi = perf_reg_abi(current);
-}
diff --git a/arch/parisc/kernel/perf_regs.c b/arch/parisc/kernel/perf_regs.c
index 10a1a5f06a18..b9fe1f2fcb9b 100644
--- a/arch/parisc/kernel/perf_regs.c
+++ b/arch/parisc/kernel/perf_regs.c
@@ -53,9 +53,3 @@ u64 perf_reg_abi(struct task_struct *task)
return PERF_SAMPLE_REGS_ABI_64;
}
-void perf_get_regs_user(struct perf_regs *regs_user,
- struct pt_regs *regs)
-{
- regs_user->regs = task_pt_regs(current);
- regs_user->abi = perf_reg_abi(current);
-}
diff --git a/arch/riscv/kernel/perf_regs.c b/arch/riscv/kernel/perf_regs.c
index fd304a248de6..3bba8deababb 100644
--- a/arch/riscv/kernel/perf_regs.c
+++ b/arch/riscv/kernel/perf_regs.c
@@ -35,9 +35,3 @@ u64 perf_reg_abi(struct task_struct *task)
#endif
}
-void perf_get_regs_user(struct perf_regs *regs_user,
- struct pt_regs *regs)
-{
- regs_user->regs = task_pt_regs(current);
- regs_user->abi = perf_reg_abi(current);
-}
diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c
index 624703af80a1..81204cb7f723 100644
--- a/arch/x86/kernel/perf_regs.c
+++ b/arch/x86/kernel/perf_regs.c
@@ -100,12 +100,6 @@ u64 perf_reg_abi(struct task_struct *task)
return PERF_SAMPLE_REGS_ABI_32;
}
-void perf_get_regs_user(struct perf_regs *regs_user,
- struct pt_regs *regs)
-{
- regs_user->regs = task_pt_regs(current);
- regs_user->abi = perf_reg_abi(current);
-}
#else /* CONFIG_X86_64 */
#define REG_NOSUPPORT ((1ULL << PERF_REG_X86_DS) | \
(1ULL << PERF_REG_X86_ES) | \
diff --git a/include/linux/perf_regs.h b/include/linux/perf_regs.h
index f632c5725f16..144bcc3ff19f 100644
--- a/include/linux/perf_regs.h
+++ b/include/linux/perf_regs.h
@@ -9,6 +9,12 @@ struct perf_regs {
struct pt_regs *regs;
};
+u64 perf_reg_value(struct pt_regs *regs, int idx);
+int perf_reg_validate(u64 mask);
+u64 perf_reg_abi(struct task_struct *task);
+void perf_get_regs_user(struct perf_regs *regs_user,
+ struct pt_regs *regs);
+
#ifdef CONFIG_HAVE_PERF_REGS
#include <asm/perf_regs.h>
@@ -16,35 +22,9 @@ struct perf_regs {
#define PERF_REG_EXTENDED_MASK 0
#endif
-u64 perf_reg_value(struct pt_regs *regs, int idx);
-int perf_reg_validate(u64 mask);
-u64 perf_reg_abi(struct task_struct *task);
-void perf_get_regs_user(struct perf_regs *regs_user,
- struct pt_regs *regs);
#else
#define PERF_REG_EXTENDED_MASK 0
-static inline u64 perf_reg_value(struct pt_regs *regs, int idx)
-{
- return 0;
-}
-
-static inline int perf_reg_validate(u64 mask)
-{
- return mask ? -ENOSYS : 0;
-}
-
-static inline u64 perf_reg_abi(struct task_struct *task)
-{
- return PERF_SAMPLE_REGS_ABI_NONE;
-}
-
-static inline void perf_get_regs_user(struct perf_regs *regs_user,
- struct pt_regs *regs)
-{
- regs_user->regs = task_pt_regs(current);
- regs_user->abi = perf_reg_abi(current);
-}
#endif /* CONFIG_HAVE_PERF_REGS */
#endif /* _LINUX_PERF_REGS_H */
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 5eeae8636996..eb1dea2b1b0e 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7731,6 +7731,28 @@ unsigned long perf_instruction_pointer(struct perf_event *event,
return perf_arch_instruction_pointer(regs);
}
+u64 __weak perf_reg_value(struct pt_regs *regs, int idx)
+{
+ return 0;
+}
+
+int __weak perf_reg_validate(u64 mask)
+{
+ return mask ? -ENOSYS : 0;
+}
+
+u64 __weak perf_reg_abi(struct task_struct *task)
+{
+ return PERF_SAMPLE_REGS_ABI_NONE;
+}
+
+void __weak perf_get_regs_user(struct perf_regs *regs_user,
+ struct pt_regs *regs)
+{
+ regs_user->regs = task_pt_regs(current);
+ regs_user->abi = perf_reg_abi(current);
+}
+
static void
perf_output_sample_regs(struct perf_output_handle *handle,
struct pt_regs *regs, u64 mask)
--
2.34.1
* [Patch v7 06/24] perf/x86: Use x86_perf_regs in the x86 nmi handler
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
From: Kan Liang <kan.liang@linux.intel.com>
More and more registers will be supported in the overflow handler, e.g.,
more vector registers, SSP, etc. The generic pt_regs struct cannot store
all of them. Use the x86-specific x86_perf_regs instead.
The struct pt_regs *regs is still passed to x86_pmu_handle_irq(). There
is no functional change for the existing code.
AMD IBS's NMI handler doesn't utilize the static call
x86_pmu_handle_irq(), so the x86_perf_regs struct doesn't apply to AMD
IBS. It can be added separately later when AMD IBS supports more
registers.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
V7: use per-cpu x86_intr_regs to replace temporary variable in v6.
arch/x86/events/core.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 67883cf1d675..ad6cbc19592d 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1782,9 +1782,11 @@ void perf_put_guest_lvtpc(void)
EXPORT_SYMBOL_FOR_KVM(perf_put_guest_lvtpc);
#endif /* CONFIG_PERF_GUEST_MEDIATED_PMU */
+static DEFINE_PER_CPU(struct x86_perf_regs, x86_intr_regs);
static int
perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
{
+ struct x86_perf_regs *x86_regs = this_cpu_ptr(&x86_intr_regs);
u64 start_clock;
u64 finish_clock;
int ret;
@@ -1808,7 +1810,8 @@ perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
return NMI_DONE;
start_clock = sched_clock();
- ret = static_call(x86_pmu_handle_irq)(regs);
+ x86_regs->regs = *regs;
+ ret = static_call(x86_pmu_handle_irq)(&x86_regs->regs);
finish_clock = sched_clock();
perf_sample_event_took(finish_clock - start_clock);
--
2.34.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [Patch v7 07/24] perf/x86: Introduce x86-specific x86_pmu_setup_regs_data()
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (5 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 06/24] perf/x86: Use x86_perf_regs in the x86 nmi handler Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-25 5:18 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 08/24] x86/fpu/xstate: Add xsaves_nmi() helper Dapeng Mi
` (18 subsequent siblings)
25 siblings, 1 reply; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Kan Liang, Dapeng Mi
From: Kan Liang <kan.liang@linux.intel.com>
The current perf/x86 implementation uses the generic functions
perf_sample_regs_user() and perf_sample_regs_intr() to set up registers
data for sampling records. While this approach works for general
registers, it falls short when adding sampling support for SIMD and APX
eGPR registers on x86 platforms.
To address this, we introduce the x86-specific function
x86_pmu_setup_regs_data() for setting up register data on x86 platforms.
At present, x86_pmu_setup_regs_data() mirrors the logic of the generic
functions perf_sample_regs_user() and perf_sample_regs_intr().
Subsequent patches will introduce x86-specific enhancements.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/core.c | 33 +++++++++++++++++++++++++++++++++
arch/x86/events/intel/ds.c | 9 ++++++---
arch/x86/events/perf_event.h | 4 ++++
3 files changed, 43 insertions(+), 3 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index ad6cbc19592d..0a6c51e86e9b 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1699,6 +1699,39 @@ static void x86_pmu_del(struct perf_event *event, int flags)
static_call_cond(x86_pmu_del)(event);
}
+void x86_pmu_setup_regs_data(struct perf_event *event,
+ struct perf_sample_data *data,
+ struct pt_regs *regs)
+{
+ struct perf_event_attr *attr = &event->attr;
+ u64 sample_type = attr->sample_type;
+
+ if (sample_type & PERF_SAMPLE_REGS_USER) {
+ if (user_mode(regs)) {
+ data->regs_user.abi = perf_reg_abi(current);
+ data->regs_user.regs = regs;
+ } else if (!(current->flags & PF_KTHREAD)) {
+ perf_get_regs_user(&data->regs_user, regs);
+ } else {
+ data->regs_user.abi = PERF_SAMPLE_REGS_ABI_NONE;
+ data->regs_user.regs = NULL;
+ }
+ data->dyn_size += sizeof(u64);
+ if (data->regs_user.regs)
+ data->dyn_size += hweight64(attr->sample_regs_user) * sizeof(u64);
+ data->sample_flags |= PERF_SAMPLE_REGS_USER;
+ }
+
+ if (sample_type & PERF_SAMPLE_REGS_INTR) {
+ data->regs_intr.regs = regs;
+ data->regs_intr.abi = perf_reg_abi(current);
+ data->dyn_size += sizeof(u64);
+ if (data->regs_intr.regs)
+ data->dyn_size += hweight64(attr->sample_regs_intr) * sizeof(u64);
+ data->sample_flags |= PERF_SAMPLE_REGS_INTR;
+ }
+}
+
int x86_pmu_handle_irq(struct pt_regs *regs)
{
struct perf_sample_data data;
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 52eb6eac5df3..b045297c02d0 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -2450,6 +2450,7 @@ static inline void __setup_pebs_basic_group(struct perf_event *event,
}
static inline void __setup_pebs_gpr_group(struct perf_event *event,
+ struct perf_sample_data *data,
struct pt_regs *regs,
struct pebs_gprs *gprs,
u64 sample_type)
@@ -2459,8 +2460,10 @@ static inline void __setup_pebs_gpr_group(struct perf_event *event,
regs->flags &= ~PERF_EFLAGS_EXACT;
}
- if (sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER))
+ if (sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER)) {
adaptive_pebs_save_regs(regs, gprs);
+ x86_pmu_setup_regs_data(event, data, regs);
+ }
}
static inline void __setup_pebs_meminfo_group(struct perf_event *event,
@@ -2553,7 +2556,7 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
gprs = next_record;
next_record = gprs + 1;
- __setup_pebs_gpr_group(event, regs, gprs, sample_type);
+ __setup_pebs_gpr_group(event, data, regs, gprs, sample_type);
}
if (format_group & PEBS_DATACFG_MEMINFO) {
@@ -2677,7 +2680,7 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
gprs = next_record;
next_record = gprs + 1;
- __setup_pebs_gpr_group(event, regs,
+ __setup_pebs_gpr_group(event, data, regs,
(struct pebs_gprs *)gprs,
sample_type);
}
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index fad87d3c8b2c..39c41947c70d 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1306,6 +1306,10 @@ void x86_pmu_enable_event(struct perf_event *event);
int x86_pmu_handle_irq(struct pt_regs *regs);
+void x86_pmu_setup_regs_data(struct perf_event *event,
+ struct perf_sample_data *data,
+ struct pt_regs *regs);
+
void x86_pmu_show_pmu_cap(struct pmu *pmu);
static inline int x86_pmu_num_counters(struct pmu *pmu)
--
2.34.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [Patch v7 08/24] x86/fpu/xstate: Add xsaves_nmi() helper
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (6 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 07/24] perf/x86: Introduce x86-specific x86_pmu_setup_regs_data() Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-24 0:41 ` [Patch v7 09/24] x86/fpu: Ensure TIF_NEED_FPU_LOAD is set after saving FPU state Dapeng Mi
` (17 subsequent siblings)
25 siblings, 0 replies; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Kan Liang, Dapeng Mi
From: Kan Liang <kan.liang@linux.intel.com>
Add xsaves_nmi() to save the supported xsave states in an NMI handler.
This function is similar to xsaves(), but should only be called within
an NMI handler. It returns the actual register contents at the moment
the NMI occurs.
Currently the perf subsystem is the sole user of this helper. It uses
the function to snapshot the SIMD (XMM/YMM/ZMM) and APX eGPR registers;
that support is added in subsequent patches.
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/include/asm/fpu/xstate.h | 1 +
arch/x86/kernel/fpu/xstate.c | 23 +++++++++++++++++++++++
2 files changed, 24 insertions(+)
diff --git a/arch/x86/include/asm/fpu/xstate.h b/arch/x86/include/asm/fpu/xstate.h
index 7a7dc9d56027..38fa8ff26559 100644
--- a/arch/x86/include/asm/fpu/xstate.h
+++ b/arch/x86/include/asm/fpu/xstate.h
@@ -110,6 +110,7 @@ int xfeature_size(int xfeature_nr);
void xsaves(struct xregs_state *xsave, u64 mask);
void xrstors(struct xregs_state *xsave, u64 mask);
+void xsaves_nmi(struct xregs_state *xsave, u64 mask);
int xfd_enable_feature(u64 xfd_err);
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index 76153dfb58c9..39e5f9e79a4c 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -1475,6 +1475,29 @@ void xrstors(struct xregs_state *xstate, u64 mask)
WARN_ON_ONCE(err);
}
+/**
+ * xsaves_nmi - Save selected components to a kernel xstate buffer in NMI
+ * @xstate: Pointer to the buffer
+ * @mask: Feature mask to select the components to save
+ *
+ * This function is similar to xsaves(), but should only be called within
+ * an NMI handler. It returns the actual register contents at
+ * the moment the NMI occurs.
+ *
+ * Currently, the perf subsystem is the sole user of this helper. It uses
+ * the function to snapshot SIMD (XMM/YMM/ZMM) and APX eGPRs registers.
+ */
+void xsaves_nmi(struct xregs_state *xstate, u64 mask)
+{
+ int err;
+
+ if (!in_nmi())
+ return;
+
+ XSTATE_OP(XSAVES, xstate, (u32)mask, (u32)(mask >> 32), err);
+ WARN_ON_ONCE(err);
+}
+
#if IS_ENABLED(CONFIG_KVM)
void fpstate_clear_xstate_component(struct fpstate *fpstate, unsigned int xfeature)
{
--
2.34.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [Patch v7 09/24] x86/fpu: Ensure TIF_NEED_FPU_LOAD is set after saving FPU state
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (7 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 08/24] x86/fpu/xstate: Add xsaves_nmi() helper Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-24 0:41 ` [Patch v7 10/24] perf: Move and rename has_extended_regs() for ARCH-specific use Dapeng Mi
` (16 subsequent siblings)
25 siblings, 0 replies; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Dapeng Mi
As suggested by Peter and Dave, ensure that the TIF_NEED_FPU_LOAD flag
is always set after saving the FPU state. This guarantees that the
user space FPU state has been saved whenever the TIF_NEED_FPU_LOAD
flag is set.
A subsequent patch will check the TIF_NEED_FPU_LOAD flag in NMI context
to decide whether the user space FPU state can be retrieved from the
saved task FPU state.
See the link below for more background on the suggestion.
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lore.kernel.org/all/20251204154721.GB2619703@noisy.programming.kicks-ass.net/
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
V7: Add wrapper helper update_fpu_state_and_flag() and corresponding
comments.
arch/x86/include/asm/fpu/sched.h | 5 +++--
arch/x86/kernel/fpu/core.c | 27 ++++++++++++++++++++-------
2 files changed, 23 insertions(+), 9 deletions(-)
diff --git a/arch/x86/include/asm/fpu/sched.h b/arch/x86/include/asm/fpu/sched.h
index 89004f4ca208..dcb2fa5f06d6 100644
--- a/arch/x86/include/asm/fpu/sched.h
+++ b/arch/x86/include/asm/fpu/sched.h
@@ -10,6 +10,8 @@
#include <asm/trace/fpu.h>
extern void save_fpregs_to_fpstate(struct fpu *fpu);
+extern void update_fpu_state_and_flag(struct fpu *fpu,
+ struct task_struct *task);
extern void fpu__drop(struct task_struct *tsk);
extern int fpu_clone(struct task_struct *dst, u64 clone_flags, bool minimal,
unsigned long shstk_addr);
@@ -36,8 +38,7 @@ static inline void switch_fpu(struct task_struct *old, int cpu)
!(old->flags & (PF_KTHREAD | PF_USER_WORKER))) {
struct fpu *old_fpu = x86_task_fpu(old);
- set_tsk_thread_flag(old, TIF_NEED_FPU_LOAD);
- save_fpregs_to_fpstate(old_fpu);
+ update_fpu_state_and_flag(old_fpu, old);
/*
* The save operation preserved register state, so the
* fpu_fpregs_owner_ctx is still @old_fpu. Store the
diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
index 608983806fd7..48d1ab50a961 100644
--- a/arch/x86/kernel/fpu/core.c
+++ b/arch/x86/kernel/fpu/core.c
@@ -213,6 +213,19 @@ void restore_fpregs_from_fpstate(struct fpstate *fpstate, u64 mask)
}
}
+/*
+ * Save the FPU register state in fpu->fpstate->regs and set
+ * TIF_NEED_FPU_LOAD subsequently.
+ *
+ * Must be called with fpregs_lock() held, ensuring flag
+ * TIF_NEED_FPU_LOAD is set last.
+ */
+void update_fpu_state_and_flag(struct fpu *fpu, struct task_struct *task)
+{
+ save_fpregs_to_fpstate(fpu);
+ set_tsk_thread_flag(task, TIF_NEED_FPU_LOAD);
+}
+
void fpu_reset_from_exception_fixup(void)
{
restore_fpregs_from_fpstate(&init_fpstate, XFEATURE_MASK_FPSTATE);
@@ -379,17 +392,19 @@ int fpu_swap_kvm_fpstate(struct fpu_guest *guest_fpu, bool enter_guest)
fpregs_lock();
if (!cur_fps->is_confidential && !test_thread_flag(TIF_NEED_FPU_LOAD))
- save_fpregs_to_fpstate(fpu);
+ update_fpu_state_and_flag(fpu, current);
/* Swap fpstate */
if (enter_guest) {
- fpu->__task_fpstate = cur_fps;
+ WRITE_ONCE(fpu->__task_fpstate, cur_fps);
+ barrier();
fpu->fpstate = guest_fps;
guest_fps->in_use = true;
} else {
guest_fps->in_use = false;
fpu->fpstate = fpu->__task_fpstate;
- fpu->__task_fpstate = NULL;
+ barrier();
+ WRITE_ONCE(fpu->__task_fpstate, NULL);
}
cur_fps = fpu->fpstate;
@@ -481,10 +496,8 @@ void kernel_fpu_begin_mask(unsigned int kfpu_mask)
this_cpu_write(kernel_fpu_allowed, false);
if (!(current->flags & (PF_KTHREAD | PF_USER_WORKER)) &&
- !test_thread_flag(TIF_NEED_FPU_LOAD)) {
- set_thread_flag(TIF_NEED_FPU_LOAD);
- save_fpregs_to_fpstate(x86_task_fpu(current));
- }
+ !test_thread_flag(TIF_NEED_FPU_LOAD))
+ update_fpu_state_and_flag(x86_task_fpu(current), current);
__cpu_invalidate_fpregs_state();
/* Put sane initial values into the control registers. */
--
2.34.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [Patch v7 10/24] perf: Move and rename has_extended_regs() for ARCH-specific use
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (8 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 09/24] x86/fpu: Ensure TIF_NEED_FPU_LOAD is set after saving FPU state Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-24 0:41 ` [Patch v7 11/24] perf/x86: Enable XMM Register Sampling for Non-PEBS Events Dapeng Mi
` (15 subsequent siblings)
25 siblings, 0 replies; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Kan Liang, Dapeng Mi
From: Kan Liang <kan.liang@linux.intel.com>
The has_extended_regs() function will be used in arch-specific code.
To facilitate this, move it to the header file perf_event.h.
Additionally, rename the function to event_has_extended_regs(), which
aligns with the existing naming conventions.
No functional change intended.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
include/linux/perf_event.h | 8 ++++++++
kernel/events/core.c | 8 +-------
2 files changed, 9 insertions(+), 7 deletions(-)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 48d851fbd8ea..e8b0d8e2d2af 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1534,6 +1534,14 @@ perf_event__output_id_sample(struct perf_event *event,
extern void
perf_log_lost_samples(struct perf_event *event, u64 lost);
+static inline bool event_has_extended_regs(struct perf_event *event)
+{
+ struct perf_event_attr *attr = &event->attr;
+
+ return (attr->sample_regs_user & PERF_REG_EXTENDED_MASK) ||
+ (attr->sample_regs_intr & PERF_REG_EXTENDED_MASK);
+}
+
static inline bool event_has_any_exclude_flag(struct perf_event *event)
{
struct perf_event_attr *attr = &event->attr;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index eb1dea2b1b0e..7558bc5b1e73 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -12978,12 +12978,6 @@ int perf_pmu_unregister(struct pmu *pmu)
}
EXPORT_SYMBOL_GPL(perf_pmu_unregister);
-static inline bool has_extended_regs(struct perf_event *event)
-{
- return (event->attr.sample_regs_user & PERF_REG_EXTENDED_MASK) ||
- (event->attr.sample_regs_intr & PERF_REG_EXTENDED_MASK);
-}
-
static int perf_try_init_event(struct pmu *pmu, struct perf_event *event)
{
struct perf_event_context *ctx = NULL;
@@ -13018,7 +13012,7 @@ static int perf_try_init_event(struct pmu *pmu, struct perf_event *event)
goto err_pmu;
if (!(pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS) &&
- has_extended_regs(event)) {
+ event_has_extended_regs(event)) {
ret = -EOPNOTSUPP;
goto err_destroy;
}
--
2.34.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [Patch v7 11/24] perf/x86: Enable XMM Register Sampling for Non-PEBS Events
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (9 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 10/24] perf: Move and rename has_extended_regs() for ARCH-specific use Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-25 7:30 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 12/24] perf/x86: Enable XMM register sampling for REGS_USER case Dapeng Mi
` (14 subsequent siblings)
25 siblings, 1 reply; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Dapeng Mi, Kan Liang
Previously, XMM register sampling was only available for PEBS events,
starting from Ice Lake. Extend the support to non-PEBS events by
utilizing the XSAVES instruction, thereby completing the feature set.
To implement this, a 64-byte aligned buffer is required. A per-CPU
ext_regs_buf is introduced to store SIMD and other registers, with an
approximate size of 2K. The buffer is allocated using kzalloc_node();
kmalloc() naturally aligns power-of-two sized allocations, so the
64-byte alignment requirement is met.
XMM sampling for non-PEBS events is supported in the REGS_INTR case.
Support for REGS_USER will be added in a subsequent patch. For PEBS
events, XMM register sampling data is directly retrieved from PEBS
records.
Future support for additional vector registers (YMM/ZMM/OPMASK) is
planned. An ext_regs_mask field is added to track the supported vector
register groups.
Co-developed-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
V7: Optimize and simplify x86_pmu_sample_xregs(), etc. No functional
change.
arch/x86/events/core.c | 139 +++++++++++++++++++++++++++---
arch/x86/events/intel/core.c | 31 ++++++-
arch/x86/events/intel/ds.c | 20 +++--
arch/x86/events/perf_event.h | 11 ++-
arch/x86/include/asm/fpu/xstate.h | 2 +
arch/x86/include/asm/perf_event.h | 5 +-
arch/x86/kernel/fpu/xstate.c | 2 +-
7 files changed, 185 insertions(+), 25 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 0a6c51e86e9b..22965a8a22b3 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -410,6 +410,45 @@ set_ext_hw_attr(struct hw_perf_event *hwc, struct perf_event *event)
return x86_pmu_extra_regs(val, event);
}
+static DEFINE_PER_CPU(struct xregs_state *, ext_regs_buf);
+
+static void release_ext_regs_buffers(void)
+{
+ int cpu;
+
+ if (!x86_pmu.ext_regs_mask)
+ return;
+
+ for_each_possible_cpu(cpu) {
+ kfree(per_cpu(ext_regs_buf, cpu));
+ per_cpu(ext_regs_buf, cpu) = NULL;
+ }
+}
+
+static void reserve_ext_regs_buffers(void)
+{
+ bool compacted = cpu_feature_enabled(X86_FEATURE_XCOMPACTED);
+ unsigned int size;
+ int cpu;
+
+ if (!x86_pmu.ext_regs_mask)
+ return;
+
+ size = xstate_calculate_size(x86_pmu.ext_regs_mask, compacted);
+
+ for_each_possible_cpu(cpu) {
+ per_cpu(ext_regs_buf, cpu) = kzalloc_node(size, GFP_KERNEL,
+ cpu_to_node(cpu));
+ if (!per_cpu(ext_regs_buf, cpu))
+ goto err;
+ }
+
+ return;
+
+err:
+ release_ext_regs_buffers();
+}
+
int x86_reserve_hardware(void)
{
int err = 0;
@@ -422,6 +461,7 @@ int x86_reserve_hardware(void)
} else {
reserve_ds_buffers();
reserve_lbr_buffers();
+ reserve_ext_regs_buffers();
}
}
if (!err)
@@ -438,6 +478,7 @@ void x86_release_hardware(void)
release_pmc_hardware();
release_ds_buffers();
release_lbr_buffers();
+ release_ext_regs_buffers();
mutex_unlock(&pmc_reserve_mutex);
}
}
@@ -655,18 +696,23 @@ int x86_pmu_hw_config(struct perf_event *event)
return -EINVAL;
}
- /* sample_regs_user never support XMM registers */
- if (unlikely(event->attr.sample_regs_user & PERF_REG_EXTENDED_MASK))
- return -EINVAL;
- /*
- * Besides the general purpose registers, XMM registers may
- * be collected in PEBS on some platforms, e.g. Icelake
- */
- if (unlikely(event->attr.sample_regs_intr & PERF_REG_EXTENDED_MASK)) {
- if (!(event->pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS))
- return -EINVAL;
+ if (event->attr.sample_type & PERF_SAMPLE_REGS_INTR) {
+ /*
+ * Besides the general purpose registers, XMM registers may
+ * be collected as well.
+ */
+ if (event_has_extended_regs(event)) {
+ if (!(event->pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS))
+ return -EINVAL;
+ }
+ }
- if (!event->attr.precise_ip)
+ if (event->attr.sample_type & PERF_SAMPLE_REGS_USER) {
+ /*
+ * Currently XMM registers sampling for REGS_USER is not
+ * supported yet.
+ */
+ if (event_has_extended_regs(event))
return -EINVAL;
}
@@ -1699,9 +1745,9 @@ static void x86_pmu_del(struct perf_event *event, int flags)
static_call_cond(x86_pmu_del)(event);
}
-void x86_pmu_setup_regs_data(struct perf_event *event,
- struct perf_sample_data *data,
- struct pt_regs *regs)
+static void x86_pmu_setup_gpregs_data(struct perf_event *event,
+ struct perf_sample_data *data,
+ struct pt_regs *regs)
{
struct perf_event_attr *attr = &event->attr;
u64 sample_type = attr->sample_type;
@@ -1732,6 +1778,71 @@ void x86_pmu_setup_regs_data(struct perf_event *event,
}
}
+inline void x86_pmu_clear_perf_regs(struct pt_regs *regs)
+{
+ struct x86_perf_regs *perf_regs = container_of(regs, struct x86_perf_regs, regs);
+
+ perf_regs->xmm_regs = NULL;
+}
+
+static inline void x86_pmu_update_xregs(struct x86_perf_regs *perf_regs,
+ struct xregs_state *xsave, u64 bitmap)
+{
+ u64 mask;
+
+ if (!xsave)
+ return;
+
+ /* Filtered by what XSAVE really gives */
+ mask = bitmap & xsave->header.xfeatures;
+
+ if (mask & XFEATURE_MASK_SSE)
+ perf_regs->xmm_space = xsave->i387.xmm_space;
+}
+
+static void x86_pmu_sample_xregs(struct perf_event *event,
+ struct perf_sample_data *data,
+ u64 ignore_mask)
+{
+ struct xregs_state *xsave = per_cpu(ext_regs_buf, smp_processor_id());
+ u64 sample_type = event->attr.sample_type;
+ struct x86_perf_regs *perf_regs;
+ u64 intr_mask = 0;
+ u64 mask = 0;
+
+ if (WARN_ON_ONCE(!xsave))
+ return;
+
+ if (event_has_extended_regs(event))
+ mask |= XFEATURE_MASK_SSE;
+
+ mask &= x86_pmu.ext_regs_mask;
+
+ if ((sample_type & PERF_SAMPLE_REGS_INTR) && data->regs_intr.abi)
+ intr_mask = mask & ~ignore_mask;
+
+ if (intr_mask) {
+ perf_regs = container_of(data->regs_intr.regs,
+ struct x86_perf_regs, regs);
+ xsave->header.xfeatures = 0;
+ xsaves_nmi(xsave, mask);
+ x86_pmu_update_xregs(perf_regs, xsave, intr_mask);
+ }
+}
+
+void x86_pmu_setup_regs_data(struct perf_event *event,
+ struct perf_sample_data *data,
+ struct pt_regs *regs,
+ u64 ignore_mask)
+{
+ x86_pmu_setup_gpregs_data(event, data, regs);
+ /*
+ * ignore_mask indicates the PEBS sampled extended regs
+ * which are unnecessary to sample again.
+ */
+ x86_pmu_sample_xregs(event, data, ignore_mask);
+}
+
int x86_pmu_handle_irq(struct pt_regs *regs)
{
struct perf_sample_data data;
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 5a2b1503b6a5..5772dcc3bcbd 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3649,6 +3649,9 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
if (has_branch_stack(event))
intel_pmu_lbr_save_brstack(&data, cpuc, event);
+ x86_pmu_clear_perf_regs(regs);
+ x86_pmu_setup_regs_data(event, &data, regs, 0);
+
perf_event_overflow(event, &data, regs);
}
@@ -5884,8 +5887,32 @@ static inline void __intel_update_large_pebs_flags(struct pmu *pmu)
}
}
-#define counter_mask(_gp, _fixed) ((_gp) | ((u64)(_fixed) << INTEL_PMC_IDX_FIXED))
+static void intel_extended_regs_init(struct pmu *pmu)
+{
+ struct pmu *dest_pmu = pmu ? pmu : x86_get_pmu(smp_processor_id());
+
+ /*
+ * Extend the vector registers support to non-PEBS.
+ * The feature is limited to newer Intel machines with
+ * PEBS V4+ or archPerfmonExt (0x23) enabled for now.
+ * In theory, the vector registers can be retrieved as
+ * long as the CPU supports. The support for the old
+ * generations may be added later if there is a
+ * requirement.
+ * Only support the extension when XSAVES is available.
+ */
+ if (!boot_cpu_has(X86_FEATURE_XSAVES))
+ return;
+
+ if (!boot_cpu_has(X86_FEATURE_XMM) ||
+ !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+ return;
+ x86_pmu.ext_regs_mask |= XFEATURE_MASK_SSE;
+ dest_pmu->capabilities |= PERF_PMU_CAP_EXTENDED_REGS;
+}
+
+#define counter_mask(_gp, _fixed) ((_gp) | ((u64)(_fixed) << INTEL_PMC_IDX_FIXED))
static void update_pmu_cap(struct pmu *pmu)
{
unsigned int eax, ebx, ecx, edx;
@@ -5949,6 +5976,8 @@ static void update_pmu_cap(struct pmu *pmu)
/* Perf Metric (Bit 15) and PEBS via PT (Bit 16) are hybrid enumeration */
rdmsrq(MSR_IA32_PERF_CAPABILITIES, hybrid(pmu, intel_cap).capabilities);
}
+
+ intel_extended_regs_init(pmu);
}
static void intel_pmu_check_hybrid_pmus(struct x86_hybrid_pmu *pmu)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index b045297c02d0..74a41dae8a62 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1743,8 +1743,7 @@ static u64 pebs_update_adaptive_cfg(struct perf_event *event)
if (gprs || (attr->precise_ip < 2) || tsx_weight)
pebs_data_cfg |= PEBS_DATACFG_GP;
- if ((sample_type & PERF_SAMPLE_REGS_INTR) &&
- (attr->sample_regs_intr & PERF_REG_EXTENDED_MASK))
+ if (event_has_extended_regs(event))
pebs_data_cfg |= PEBS_DATACFG_XMMS;
if (sample_type & PERF_SAMPLE_BRANCH_STACK) {
@@ -2460,10 +2459,8 @@ static inline void __setup_pebs_gpr_group(struct perf_event *event,
regs->flags &= ~PERF_EFLAGS_EXACT;
}
- if (sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER)) {
+ if (sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER))
adaptive_pebs_save_regs(regs, gprs);
- x86_pmu_setup_regs_data(event, data, regs);
- }
}
static inline void __setup_pebs_meminfo_group(struct perf_event *event,
@@ -2521,6 +2518,7 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
struct pebs_meminfo *meminfo = NULL;
struct pebs_gprs *gprs = NULL;
struct x86_perf_regs *perf_regs;
+ u64 ignore_mask = 0;
u64 format_group;
u16 retire;
@@ -2528,7 +2526,7 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
return;
perf_regs = container_of(regs, struct x86_perf_regs, regs);
- perf_regs->xmm_regs = NULL;
+ x86_pmu_clear_perf_regs(regs);
format_group = basic->format_group;
@@ -2575,6 +2573,7 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
if (format_group & PEBS_DATACFG_XMMS) {
struct pebs_xmm *xmm = next_record;
+ ignore_mask |= XFEATURE_MASK_SSE;
next_record = xmm + 1;
perf_regs->xmm_regs = xmm->xmm;
}
@@ -2613,6 +2612,8 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
next_record += nr * sizeof(u64);
}
+ x86_pmu_setup_regs_data(event, data, regs, ignore_mask);
+
WARN_ONCE(next_record != __pebs + basic->format_size,
"PEBS record size %u, expected %llu, config %llx\n",
basic->format_size,
@@ -2638,6 +2639,7 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
struct arch_pebs_aux *meminfo = NULL;
struct arch_pebs_gprs *gprs = NULL;
struct x86_perf_regs *perf_regs;
+ u64 ignore_mask = 0;
void *next_record;
void *at = __pebs;
@@ -2645,7 +2647,7 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
return;
perf_regs = container_of(regs, struct x86_perf_regs, regs);
- perf_regs->xmm_regs = NULL;
+ x86_pmu_clear_perf_regs(regs);
__setup_perf_sample_data(event, iregs, data);
@@ -2700,6 +2702,7 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
next_record += sizeof(struct arch_pebs_xer_header);
+ ignore_mask |= XFEATURE_MASK_SSE;
xmm = next_record;
perf_regs->xmm_regs = xmm->xmm;
next_record = xmm + 1;
@@ -2747,6 +2750,8 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
at = at + header->size;
goto again;
}
+
+ x86_pmu_setup_regs_data(event, data, regs, ignore_mask);
}
static inline void *
@@ -3409,6 +3414,7 @@ static void __init intel_ds_pebs_init(void)
x86_pmu.flags |= PMU_FL_PEBS_ALL;
x86_pmu.pebs_capable = ~0ULL;
pebs_qual = "-baseline";
+ x86_pmu.ext_regs_mask |= XFEATURE_MASK_SSE;
x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_EXTENDED_REGS;
} else {
/* Only basic record supported */
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 39c41947c70d..a5e5bffb711e 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1020,6 +1020,12 @@ struct x86_pmu {
struct extra_reg *extra_regs;
unsigned int flags;
+ /*
+ * Extended regs, e.g., vector registers
+ * Utilize the same format as the XFEATURE_MASK_*
+ */
+ u64 ext_regs_mask;
+
/*
* Intel host/guest support (KVM)
*/
@@ -1306,9 +1312,12 @@ void x86_pmu_enable_event(struct perf_event *event);
int x86_pmu_handle_irq(struct pt_regs *regs);
+void x86_pmu_clear_perf_regs(struct pt_regs *regs);
+
void x86_pmu_setup_regs_data(struct perf_event *event,
struct perf_sample_data *data,
- struct pt_regs *regs);
+ struct pt_regs *regs,
+ u64 ignore_mask);
void x86_pmu_show_pmu_cap(struct pmu *pmu);
diff --git a/arch/x86/include/asm/fpu/xstate.h b/arch/x86/include/asm/fpu/xstate.h
index 38fa8ff26559..19dec5f0b1c7 100644
--- a/arch/x86/include/asm/fpu/xstate.h
+++ b/arch/x86/include/asm/fpu/xstate.h
@@ -112,6 +112,8 @@ void xsaves(struct xregs_state *xsave, u64 mask);
void xrstors(struct xregs_state *xsave, u64 mask);
void xsaves_nmi(struct xregs_state *xsave, u64 mask);
+unsigned int xstate_calculate_size(u64 xfeatures, bool compacted);
+
int xfd_enable_feature(u64 xfd_err);
#ifdef CONFIG_X86_64
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 752cb319d5ea..e47a963a7cf0 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -726,7 +726,10 @@ extern void perf_events_lapic_init(void);
struct pt_regs;
struct x86_perf_regs {
struct pt_regs regs;
- u64 *xmm_regs;
+ union {
+ u64 *xmm_regs;
+ u32 *xmm_space; /* for xsaves */
+ };
};
extern unsigned long perf_arch_instruction_pointer(struct pt_regs *regs);
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index 39e5f9e79a4c..93631f7a638e 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -587,7 +587,7 @@ static bool __init check_xstate_against_struct(int nr)
return true;
}
-static unsigned int xstate_calculate_size(u64 xfeatures, bool compacted)
+unsigned int xstate_calculate_size(u64 xfeatures, bool compacted)
{
unsigned int topmost = fls64(xfeatures) - 1;
unsigned int offset, i;
--
2.34.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [Patch v7 12/24] perf/x86: Enable XMM register sampling for REGS_USER case
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (10 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 11/24] perf/x86: Enable XMM Register Sampling for Non-PEBS Events Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-25 7:58 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 13/24] perf: Add sampling support for SIMD registers Dapeng Mi
` (13 subsequent siblings)
25 siblings, 1 reply; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Dapeng Mi, Kan Liang
This patch adds support for XMM register sampling in the REGS_USER case.
To handle simultaneous sampling of XMM registers for both REGS_INTR and
REGS_USER cases, a per-CPU `x86_user_regs` is introduced to store
REGS_USER-specific XMM registers. This prevents REGS_USER-specific XMM
register data from being overwritten by REGS_INTR-specific data if they
share the same `x86_perf_regs` structure.
To sample user-space XMM registers, the `x86_pmu_update_user_xregs()`
helper function is added. It checks if the `TIF_NEED_FPU_LOAD` flag is
set. If so, the user-space XMM register data can be directly retrieved
from the cached task FPU state, as the corresponding hardware registers
have been cleared or switched to kernel-space data. Otherwise, the data
must be read from the hardware registers using the `xsaves` instruction.
For PEBS events, `x86_pmu_update_user_ext_regs()` checks if the
PEBS-sampled XMM register data belongs to user-space. If so, no further
action is needed. Otherwise, the user-space XMM register data needs to be
re-sampled using the same method as for non-PEBS events.
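As a side note for reviewers, the branching described above can be modeled in plain user-space C; the enum and pick_user_xregs_source() are illustrative names for this sketch, not kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical user-space model of the decision made when sampling
 * user-space extended registers, as described in the commit message.
 */
enum xregs_source {
	XREGS_NONE,	/* PEBS record already holds user-space data */
	XREGS_CACHED,	/* read from the cached task FPU state */
	XREGS_HARDWARE,	/* re-read the live registers via xsaves */
};

static enum xregs_source pick_user_xregs_source(bool pebs_sample_is_user,
						bool need_fpu_load)
{
	if (pebs_sample_is_user)
		return XREGS_NONE;	/* nothing further to do */
	if (need_fpu_load)
		return XREGS_CACHED;	/* user state lives in fpstate */
	return XREGS_HARDWARE;		/* user state is still in registers */
}
```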
Co-developed-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/core.c | 95 ++++++++++++++++++++++++++++++++++++------
1 file changed, 82 insertions(+), 13 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 22965a8a22b3..a5643c875190 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -696,7 +696,7 @@ int x86_pmu_hw_config(struct perf_event *event)
return -EINVAL;
}
- if (event->attr.sample_type & PERF_SAMPLE_REGS_INTR) {
+ if (event->attr.sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER)) {
/*
* Besides the general purpose registers, XMM registers may
* be collected as well.
@@ -707,15 +707,6 @@ int x86_pmu_hw_config(struct perf_event *event)
}
}
- if (event->attr.sample_type & PERF_SAMPLE_REGS_USER) {
- /*
- * Currently XMM registers sampling for REGS_USER is not
- * supported yet.
- */
- if (event_has_extended_regs(event))
- return -EINVAL;
- }
-
return x86_setup_perfctr(event);
}
@@ -1745,6 +1736,28 @@ static void x86_pmu_del(struct perf_event *event, int flags)
static_call_cond(x86_pmu_del)(event);
}
+/*
+ * When both PERF_SAMPLE_REGS_INTR and PERF_SAMPLE_REGS_USER are set,
+ * an additional x86_perf_regs is required to save user-space registers.
+ * Without this, user-space register data may be overwritten by kernel-space
+ * registers.
+ */
+static DEFINE_PER_CPU(struct x86_perf_regs, x86_user_regs);
+static void x86_pmu_perf_get_regs_user(struct perf_sample_data *data,
+ struct pt_regs *regs)
+{
+ struct x86_perf_regs *x86_regs_user = this_cpu_ptr(&x86_user_regs);
+ struct perf_regs regs_user;
+
+ perf_get_regs_user(&regs_user, regs);
+ data->regs_user.abi = regs_user.abi;
+ if (regs_user.regs) {
+ x86_regs_user->regs = *regs_user.regs;
+ data->regs_user.regs = &x86_regs_user->regs;
+ } else
+ data->regs_user.regs = NULL;
+}
+
static void x86_pmu_setup_gpregs_data(struct perf_event *event,
struct perf_sample_data *data,
struct pt_regs *regs)
@@ -1757,7 +1770,14 @@ static void x86_pmu_setup_gpregs_data(struct perf_event *event,
data->regs_user.abi = perf_reg_abi(current);
data->regs_user.regs = regs;
} else if (!(current->flags & PF_KTHREAD)) {
- perf_get_regs_user(&data->regs_user, regs);
+ /*
+ * There is no guarantee that the kernel never
+ * touches registers outside of the pt_regs,
+ * especially as more and more registers
+ * (e.g., SIMD, eGPRs) are added. The live
+ * data cannot be used.
+ */
+ x86_pmu_perf_get_regs_user(data, regs);
} else {
data->regs_user.abi = PERF_SAMPLE_REGS_ABI_NONE;
data->regs_user.regs = NULL;
@@ -1800,6 +1820,43 @@ static inline void x86_pmu_update_xregs(struct x86_perf_regs *perf_regs,
perf_regs->xmm_space = xsave->i387.xmm_space;
}
+/*
+ * This function retrieves cached user-space fpu registers (XMM/YMM/ZMM).
+ * If TIF_NEED_FPU_LOAD is set, it indicates that the user-space FPU state
+ * is cached. Otherwise, the data should be read directly from the hardware
+ * registers.
+ */
+static inline u64 x86_pmu_update_user_xregs(struct perf_sample_data *data,
+ u64 mask, u64 ignore_mask)
+{
+ struct x86_perf_regs *perf_regs;
+ struct xregs_state *xsave;
+ struct fpu *fpu;
+ struct fpstate *fps;
+
+ if (data->regs_user.abi == PERF_SAMPLE_REGS_ABI_NONE)
+ return 0;
+
+ if (test_thread_flag(TIF_NEED_FPU_LOAD)) {
+ perf_regs = container_of(data->regs_user.regs,
+ struct x86_perf_regs, regs);
+ fpu = x86_task_fpu(current);
+ /*
+ * If __task_fpstate is set, it holds the right pointer,
+ * otherwise fpstate will.
+ */
+ fps = READ_ONCE(fpu->__task_fpstate);
+ if (!fps)
+ fps = fpu->fpstate;
+ xsave = &fps->regs.xsave;
+
+ x86_pmu_update_xregs(perf_regs, xsave, mask);
+ return 0;
+ }
+
+ return mask & ~ignore_mask;
+}
+
static void x86_pmu_sample_xregs(struct perf_event *event,
struct perf_sample_data *data,
u64 ignore_mask)
@@ -1807,6 +1864,7 @@ static void x86_pmu_sample_xregs(struct perf_event *event,
struct xregs_state *xsave = per_cpu(ext_regs_buf, smp_processor_id());
u64 sample_type = event->attr.sample_type;
struct x86_perf_regs *perf_regs;
+ u64 user_mask = 0;
u64 intr_mask = 0;
u64 mask = 0;
@@ -1817,15 +1875,26 @@ static void x86_pmu_sample_xregs(struct perf_event *event,
mask |= XFEATURE_MASK_SSE;
mask &= x86_pmu.ext_regs_mask;
+ if ((sample_type & PERF_SAMPLE_REGS_USER) && data->regs_user.abi)
+ user_mask = x86_pmu_update_user_xregs(data, mask, ignore_mask);
if ((sample_type & PERF_SAMPLE_REGS_INTR) && data->regs_intr.abi)
intr_mask = mask & ~ignore_mask;
+ if (user_mask | intr_mask) {
+ xsave->header.xfeatures = 0;
+ xsaves_nmi(xsave, user_mask | intr_mask);
+ }
+
+ if (user_mask) {
+ perf_regs = container_of(data->regs_user.regs,
+ struct x86_perf_regs, regs);
+ x86_pmu_update_xregs(perf_regs, xsave, user_mask);
+ }
+
if (intr_mask) {
perf_regs = container_of(data->regs_intr.regs,
struct x86_perf_regs, regs);
- xsave->header.xfeatures = 0;
- xsaves_nmi(xsave, mask);
x86_pmu_update_xregs(perf_regs, xsave, intr_mask);
}
}
--
2.34.1
* [Patch v7 13/24] perf: Add sampling support for SIMD registers
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (11 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 12/24] perf/x86: Enable XMM register sampling for REGS_USER case Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-25 8:44 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 14/24] perf/x86: Enable XMM sampling using sample_simd_vec_reg_* fields Dapeng Mi
` (12 subsequent siblings)
25 siblings, 1 reply; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Kan Liang, Dapeng Mi
From: Kan Liang <kan.liang@linux.intel.com>
Users may be interested in sampling SIMD registers during profiling.
The current sample_regs_* structure does not have sufficient space
for all SIMD registers.
To address this, new attribute fields sample_simd_{pred,vec}_reg_* are
added to struct perf_event_attr to represent the SIMD registers that are
expected to be sampled.
Currently, the perf/x86 code supports XMM registers in sample_regs_*.
To unify the configuration of SIMD registers and ensure a consistent
method for configuring XMM and other SIMD registers, a new event
attribute field, sample_simd_regs_enabled, is introduced. When
sample_simd_regs_enabled is set, it indicates that all SIMD registers,
including XMM, will be represented by the newly introduced
sample_simd_{pred|vec}_reg_* fields. The original XMM space in
sample_regs_* is reserved for future use.
Since SIMD registers are wider than 64 bits, a new output format is
introduced. The number and width of SIMD registers are dumped first,
followed by the register values. The number and width are based on the
user's configuration. If they differ (e.g., on ARM), an ARCH-specific
perf_output_sample_simd_regs function can be implemented separately.
A new ABI, PERF_SAMPLE_REGS_ABI_SIMD, is added to indicate the new format.
The enum perf_sample_regs_abi is now a bitmap. This change should not
impact existing tools, since the values 1 and 2 encode the same ABI
whether interpreted as an enum or as a bitmap.
Additionally, two new __weak functions are introduced:
- perf_simd_reg_value(): Retrieves the value of the requested SIMD
register.
- perf_simd_reg_validate(): Validates the configuration of the SIMD
registers.
A new flag, PERF_PMU_CAP_SIMD_REGS, is added to indicate that the PMU
supports SIMD register dumping. An error is generated if
sample_simd_{pred|vec}_reg_* is mistakenly set for a PMU that does not
support this capability.
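The new output format described above can be sketched with a user-space size computation; simd_sample_bytes() and the header struct are illustrative models of the record layout, not kernel code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Model of the SIMD portion of a sample record: the four u16 counts
 * are dumped first, followed by the register values.
 */
struct simd_sample_hdr {
	uint16_t nr_vectors;	/* weight(sample_simd_vec_reg_*) */
	uint16_t vector_qwords;	/* sample_simd_vec_reg_qwords */
	uint16_t nr_pred;	/* weight(sample_simd_pred_reg_*) */
	uint16_t pred_qwords;	/* sample_simd_pred_reg_qwords */
};

/* Bytes the SIMD part of one sample record occupies. */
static size_t simd_sample_bytes(uint64_t vec_mask, uint16_t vec_qwords,
				uint32_t pred_mask, uint16_t pred_qwords)
{
	size_t nr_vectors = (size_t)__builtin_popcountll(vec_mask);
	size_t nr_pred = (size_t)__builtin_popcount(pred_mask);

	return sizeof(struct simd_sample_hdr) +
	       (nr_vectors * vec_qwords + nr_pred * pred_qwords) *
	       sizeof(uint64_t);
}
```

For example, sampling all 16 XMM registers (mask 0xffff, 2 qwords each) with no predicate registers adds the 8-byte header plus 256 bytes of data per sample.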
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
V7: Add macro word_for_each_set_bit() to simplify u64 set-bit iteration.
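The new macro can be exercised stand-alone in user space; the macro body below matches the one added in this patch, while collect_bits() is a hypothetical harness showing the lowest-bit-first iteration order:

```c
#include <assert.h>
#include <stdint.h>

/* Copied from the patch: iterate over the set bits of a u64 word. */
#define word_for_each_set_bit(bit, val)				\
	for (unsigned long long __v = (val);			\
	     __v && ((bit = __builtin_ctzll(__v)), 1);		\
	     __v &= __v - 1)

/* Record each set-bit index of a u64 mask, lowest bit first. */
static int collect_bits(uint64_t mask, int *out)
{
	int bit, n = 0;

	word_for_each_set_bit(bit, mask)
		out[n++] = bit;
	return n;
}
```

Unlike for_each_set_bit() on a bitmap built with bitmap_from_u64(), this operates on the u64 value directly, which sidesteps the endianness issue mentioned in the cover letter.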
include/linux/perf_event.h | 8 +++
include/linux/perf_regs.h | 4 ++
include/uapi/linux/perf_event.h | 50 ++++++++++++++--
kernel/events/core.c | 102 +++++++++++++++++++++++++++++---
tools/perf/util/header.c | 3 +-
5 files changed, 153 insertions(+), 14 deletions(-)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index e8b0d8e2d2af..137d6e4a3403 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -306,6 +306,7 @@ struct perf_event_pmu_context;
#define PERF_PMU_CAP_AUX_PAUSE 0x0200
#define PERF_PMU_CAP_AUX_PREFER_LARGE 0x0400
#define PERF_PMU_CAP_MEDIATED_VPMU 0x0800
+#define PERF_PMU_CAP_SIMD_REGS 0x1000
/**
* pmu::scope
@@ -1534,6 +1535,13 @@ perf_event__output_id_sample(struct perf_event *event,
extern void
perf_log_lost_samples(struct perf_event *event, u64 lost);
+static inline bool event_has_simd_regs(struct perf_event *event)
+{
+ struct perf_event_attr *attr = &event->attr;
+
+ return attr->sample_simd_regs_enabled != 0;
+}
+
static inline bool event_has_extended_regs(struct perf_event *event)
{
struct perf_event_attr *attr = &event->attr;
diff --git a/include/linux/perf_regs.h b/include/linux/perf_regs.h
index 144bcc3ff19f..518f28c6a7d4 100644
--- a/include/linux/perf_regs.h
+++ b/include/linux/perf_regs.h
@@ -14,6 +14,10 @@ int perf_reg_validate(u64 mask);
u64 perf_reg_abi(struct task_struct *task);
void perf_get_regs_user(struct perf_regs *regs_user,
struct pt_regs *regs);
+int perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask,
+ u16 pred_qwords, u32 pred_mask);
+u64 perf_simd_reg_value(struct pt_regs *regs, int idx,
+ u16 qwords_idx, bool pred);
#ifdef CONFIG_HAVE_PERF_REGS
#include <asm/perf_regs.h>
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index fd10aa8d697f..b8c8953928f8 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -314,8 +314,9 @@ enum {
*/
enum perf_sample_regs_abi {
PERF_SAMPLE_REGS_ABI_NONE = 0,
- PERF_SAMPLE_REGS_ABI_32 = 1,
- PERF_SAMPLE_REGS_ABI_64 = 2,
+ PERF_SAMPLE_REGS_ABI_32 = (1 << 0),
+ PERF_SAMPLE_REGS_ABI_64 = (1 << 1),
+ PERF_SAMPLE_REGS_ABI_SIMD = (1 << 2),
};
/*
@@ -383,6 +384,7 @@ enum perf_event_read_format {
#define PERF_ATTR_SIZE_VER7 128 /* Add: sig_data */
#define PERF_ATTR_SIZE_VER8 136 /* Add: config3 */
#define PERF_ATTR_SIZE_VER9 144 /* add: config4 */
+#define PERF_ATTR_SIZE_VER10 176 /* Add: sample_simd_{pred,vec}_reg_* */
/*
* 'struct perf_event_attr' contains various attributes that define
@@ -547,6 +549,30 @@ struct perf_event_attr {
__u64 config3; /* extension of config2 */
__u64 config4; /* extension of config3 */
+
+ /*
+ * Defines the bitmaps and qwords (8-byte) lengths of the SIMD/PRED
+ * registers to sample.
+ *
+ * sample_simd_regs_enabled != 0 indicates there are SIMD/PRED registers
+ * to be sampled; their bitmaps and qwords lengths are represented in
+ * the sample_simd_{vec|pred}_reg_{intr|user} and
+ * sample_simd_{vec|pred}_reg_qwords fields.
+ *
+ * sample_simd_regs_enabled == 0 indicates no SIMD/PRED registers are
+ * sampled.
+ */
+ union {
+ __u16 sample_simd_regs_enabled;
+ __u16 sample_simd_pred_reg_qwords;
+ };
+ __u16 sample_simd_vec_reg_qwords;
+ __u32 __reserved_4;
+
+ __u32 sample_simd_pred_reg_intr;
+ __u32 sample_simd_pred_reg_user;
+ __u64 sample_simd_vec_reg_intr;
+ __u64 sample_simd_vec_reg_user;
};
/*
@@ -1020,7 +1046,15 @@ enum perf_event_type {
* } && PERF_SAMPLE_BRANCH_STACK
*
* { u64 abi; # enum perf_sample_regs_abi
- * u64 regs[weight(mask)]; } && PERF_SAMPLE_REGS_USER
+ * u64 regs[weight(mask)];
+ * struct {
+ * u16 nr_vectors; # 0 ... weight(sample_simd_vec_reg_user)
+ * u16 vector_qwords; # 0 ... sample_simd_vec_reg_qwords
+ * u16 nr_pred; # 0 ... weight(sample_simd_pred_reg_user)
+ * u16 pred_qwords; # 0 ... sample_simd_pred_reg_qwords
+ * u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords];
+ * } && (abi & PERF_SAMPLE_REGS_ABI_SIMD)
+ * } && PERF_SAMPLE_REGS_USER
*
* { u64 size;
* char data[size];
@@ -1047,7 +1081,15 @@ enum perf_event_type {
* { u64 data_src; } && PERF_SAMPLE_DATA_SRC
* { u64 transaction; } && PERF_SAMPLE_TRANSACTION
* { u64 abi; # enum perf_sample_regs_abi
- * u64 regs[weight(mask)]; } && PERF_SAMPLE_REGS_INTR
+ * u64 regs[weight(mask)];
+ * struct {
+ * u16 nr_vectors; # 0 ... weight(sample_simd_vec_reg_intr)
+ * u16 vector_qwords; # 0 ... sample_simd_vec_reg_qwords
+ * u16 nr_pred; # 0 ... weight(sample_simd_pred_reg_intr)
+ * u16 pred_qwords; # 0 ... sample_simd_pred_reg_qwords
+ * u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords];
+ * } && (abi & PERF_SAMPLE_REGS_ABI_SIMD)
+ * } && PERF_SAMPLE_REGS_INTR
* { u64 phys_addr;} && PERF_SAMPLE_PHYS_ADDR
* { u64 cgroup;} && PERF_SAMPLE_CGROUP
* { u64 data_page_size;} && PERF_SAMPLE_DATA_PAGE_SIZE
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 7558bc5b1e73..de42575f517b 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7753,22 +7753,60 @@ void __weak perf_get_regs_user(struct perf_regs *regs_user,
regs_user->abi = perf_reg_abi(current);
}
+#define word_for_each_set_bit(bit, val) \
+ for (unsigned long long __v = (val); \
+ __v && ((bit = __builtin_ctzll(__v)), 1); \
+ __v &= __v - 1)
+
static void
perf_output_sample_regs(struct perf_output_handle *handle,
struct pt_regs *regs, u64 mask)
{
int bit;
- DECLARE_BITMAP(_mask, 64);
-
- bitmap_from_u64(_mask, mask);
- for_each_set_bit(bit, _mask, sizeof(mask) * BITS_PER_BYTE) {
- u64 val;
- val = perf_reg_value(regs, bit);
+ word_for_each_set_bit(bit, mask) {
+ u64 val = perf_reg_value(regs, bit);
perf_output_put(handle, val);
}
}
+static void
+perf_output_sample_simd_regs(struct perf_output_handle *handle,
+ struct perf_event *event,
+ struct pt_regs *regs,
+ u64 mask, u32 pred_mask)
+{
+ u16 pred_qwords = event->attr.sample_simd_pred_reg_qwords;
+ u16 vec_qwords = event->attr.sample_simd_vec_reg_qwords;
+ u16 nr_vectors = hweight64(mask);
+ u16 nr_pred = hweight32(pred_mask);
+ int bit;
+
+ perf_output_put(handle, nr_vectors);
+ perf_output_put(handle, vec_qwords);
+ perf_output_put(handle, nr_pred);
+ perf_output_put(handle, pred_qwords);
+
+ if (nr_vectors) {
+ word_for_each_set_bit(bit, mask) {
+ for (int i = 0; i < vec_qwords; i++) {
+ u64 val = perf_simd_reg_value(regs, bit,
+ i, false);
+ perf_output_put(handle, val);
+ }
+ }
+ }
+ if (nr_pred) {
+ word_for_each_set_bit(bit, pred_mask) {
+ for (int i = 0; i < pred_qwords; i++) {
+ u64 val = perf_simd_reg_value(regs, bit,
+ i, true);
+ perf_output_put(handle, val);
+ }
+ }
+ }
+}
+
static void perf_sample_regs_user(struct perf_regs *regs_user,
struct pt_regs *regs)
{
@@ -7790,6 +7828,17 @@ static void perf_sample_regs_intr(struct perf_regs *regs_intr,
regs_intr->abi = perf_reg_abi(current);
}
+int __weak perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask,
+ u16 pred_qwords, u32 pred_mask)
+{
+ return vec_qwords || vec_mask || pred_qwords || pred_mask ? -ENOSYS : 0;
+}
+
+u64 __weak perf_simd_reg_value(struct pt_regs *regs, int idx,
+ u16 qwords_idx, bool pred)
+{
+ return 0;
+}
/*
* Get remaining task size from user stack pointer.
@@ -8320,10 +8369,17 @@ void perf_output_sample(struct perf_output_handle *handle,
perf_output_put(handle, abi);
if (abi) {
- u64 mask = event->attr.sample_regs_user;
+ struct perf_event_attr *attr = &event->attr;
+ u64 mask = attr->sample_regs_user;
perf_output_sample_regs(handle,
data->regs_user.regs,
mask);
+ if (abi & PERF_SAMPLE_REGS_ABI_SIMD) {
+ perf_output_sample_simd_regs(handle, event,
+ data->regs_user.regs,
+ attr->sample_simd_vec_reg_user,
+ attr->sample_simd_pred_reg_user);
+ }
}
}
@@ -8351,11 +8407,18 @@ void perf_output_sample(struct perf_output_handle *handle,
perf_output_put(handle, abi);
if (abi) {
- u64 mask = event->attr.sample_regs_intr;
+ struct perf_event_attr *attr = &event->attr;
+ u64 mask = attr->sample_regs_intr;
perf_output_sample_regs(handle,
data->regs_intr.regs,
mask);
+ if (abi & PERF_SAMPLE_REGS_ABI_SIMD) {
+ perf_output_sample_simd_regs(handle, event,
+ data->regs_intr.regs,
+ attr->sample_simd_vec_reg_intr,
+ attr->sample_simd_pred_reg_intr);
+ }
}
}
@@ -13011,6 +13074,12 @@ static int perf_try_init_event(struct pmu *pmu, struct perf_event *event)
if (ret)
goto err_pmu;
+ if (!(pmu->capabilities & PERF_PMU_CAP_SIMD_REGS) &&
+ event_has_simd_regs(event)) {
+ ret = -EOPNOTSUPP;
+ goto err_destroy;
+ }
+
if (!(pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS) &&
event_has_extended_regs(event)) {
ret = -EOPNOTSUPP;
@@ -13556,6 +13625,12 @@ static int perf_copy_attr(struct perf_event_attr __user *uattr,
ret = perf_reg_validate(attr->sample_regs_user);
if (ret)
return ret;
+ ret = perf_simd_reg_validate(attr->sample_simd_vec_reg_qwords,
+ attr->sample_simd_vec_reg_user,
+ attr->sample_simd_pred_reg_qwords,
+ attr->sample_simd_pred_reg_user);
+ if (ret)
+ return ret;
}
if (attr->sample_type & PERF_SAMPLE_STACK_USER) {
@@ -13576,8 +13651,17 @@ static int perf_copy_attr(struct perf_event_attr __user *uattr,
if (!attr->sample_max_stack)
attr->sample_max_stack = sysctl_perf_event_max_stack;
- if (attr->sample_type & PERF_SAMPLE_REGS_INTR)
+ if (attr->sample_type & PERF_SAMPLE_REGS_INTR) {
ret = perf_reg_validate(attr->sample_regs_intr);
+ if (ret)
+ return ret;
+ ret = perf_simd_reg_validate(attr->sample_simd_vec_reg_qwords,
+ attr->sample_simd_vec_reg_intr,
+ attr->sample_simd_pred_reg_qwords,
+ attr->sample_simd_pred_reg_intr);
+ if (ret)
+ return ret;
+ }
#ifndef CONFIG_CGROUP_PERF
if (attr->sample_type & PERF_SAMPLE_CGROUP)
diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
index 9142a8ba4019..f84200b9dd57 100644
--- a/tools/perf/util/header.c
+++ b/tools/perf/util/header.c
@@ -2051,7 +2051,8 @@ static void free_event_desc(struct evsel *events)
static bool perf_attr_check(struct perf_event_attr *attr)
{
- if (attr->__reserved_1 || attr->__reserved_2 || attr->__reserved_3) {
+ if (attr->__reserved_1 || attr->__reserved_2 ||
+ attr->__reserved_3 || attr->__reserved_4) {
pr_warning("Reserved bits are set unexpectedly. "
"Please update perf tool.\n");
return false;
--
2.34.1
* [Patch v7 14/24] perf/x86: Enable XMM sampling using sample_simd_vec_reg_* fields
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (12 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 13/24] perf: Add sampling support for SIMD registers Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-25 9:01 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 15/24] perf/x86: Enable YMM " Dapeng Mi
` (11 subsequent siblings)
25 siblings, 1 reply; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Kan Liang, Dapeng Mi
From: Kan Liang <kan.liang@linux.intel.com>
This patch adds support for sampling XMM registers using the
sample_simd_vec_reg_* fields.
When sample_simd_regs_enabled is set, the original XMM space in the
sample_regs_* fields is treated as reserved. An EINVAL error is
reported to user space if any bit is set in the original XMM space while
sample_simd_regs_enabled is set.
The perf_reg_value() function requires ABI information to understand the
layout of sample_regs. To accommodate this, a new abi field is introduced
in struct x86_perf_regs.
Additionally, the x86-specific perf_simd_reg_value() function is
implemented to retrieve the XMM register values.
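The configuration rules described above can be modeled from the user-space side; the struct below mirrors only the perf_event_attr fields involved, and check_simd_attr() is a hypothetical model of what the kernel rejects with -EINVAL, not kernel code:

```c
#include <assert.h>
#include <stdint.h>

/* Only the attr fields relevant to this sketch. */
struct simd_attr {
	uint16_t sample_simd_regs_enabled;	/* union alias of pred_qwords */
	uint16_t sample_simd_vec_reg_qwords;
	uint64_t sample_regs_intr;
};

#define PERF_X86_XMM_QWORDS	2
/* PERF_REG_X86_XMM0 is bit 32, so the old XMM space is bits 32..63. */
#define PERF_REG_EXTENDED_MASK	(~((1ULL << 32) - 1))

/* 0 on success, -1 for what the kernel would reject with -EINVAL. */
static int check_simd_attr(const struct simd_attr *a)
{
	if (!a->sample_simd_regs_enabled)
		return 0;	/* legacy path, nothing new to check */

	/* The old XMM space in sample_regs_* is reserved in the new mode. */
	if (a->sample_regs_intr & PERF_REG_EXTENDED_MASK)
		return -1;

	/* At this point in the series, only the XMM width is supported. */
	if (a->sample_simd_vec_reg_qwords &&
	    a->sample_simd_vec_reg_qwords != PERF_X86_XMM_QWORDS)
		return -1;

	return 0;
}
```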
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/core.c | 89 +++++++++++++++++++++++++--
arch/x86/events/intel/ds.c | 2 +-
arch/x86/events/perf_event.h | 12 ++++
arch/x86/include/asm/perf_event.h | 1 +
arch/x86/include/uapi/asm/perf_regs.h | 13 ++++
arch/x86/kernel/perf_regs.c | 51 ++++++++++++++-
6 files changed, 161 insertions(+), 7 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index a5643c875190..3c9b79b46a66 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -704,6 +704,22 @@ int x86_pmu_hw_config(struct perf_event *event)
if (event_has_extended_regs(event)) {
if (!(event->pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS))
return -EINVAL;
+ if (event->attr.sample_simd_regs_enabled)
+ return -EINVAL;
+ }
+
+ if (event_has_simd_regs(event)) {
+ if (!(event->pmu->capabilities & PERF_PMU_CAP_SIMD_REGS))
+ return -EINVAL;
+ /* Width is set but no vector registers are requested */
+ if (event->attr.sample_simd_vec_reg_qwords &&
+ !event->attr.sample_simd_vec_reg_intr &&
+ !event->attr.sample_simd_vec_reg_user)
+ return -EINVAL;
+ /* The requested vector register set is not supported */
+ if (event_needs_xmm(event) &&
+ !(x86_pmu.ext_regs_mask & XFEATURE_MASK_SSE))
+ return -EINVAL;
}
}
@@ -1749,6 +1765,7 @@ static void x86_pmu_perf_get_regs_user(struct perf_sample_data *data,
struct x86_perf_regs *x86_regs_user = this_cpu_ptr(&x86_user_regs);
struct perf_regs regs_user;
+ x86_regs_user->abi = PERF_SAMPLE_REGS_ABI_NONE;
perf_get_regs_user(&regs_user, regs);
data->regs_user.abi = regs_user.abi;
if (regs_user.regs) {
@@ -1758,12 +1775,26 @@ static void x86_pmu_perf_get_regs_user(struct perf_sample_data *data,
data->regs_user.regs = NULL;
}
+static inline void
+x86_pmu_update_xregs_size(struct perf_event_attr *attr,
+ struct perf_sample_data *data,
+ struct pt_regs *regs,
+ u64 mask, u64 pred_mask)
+{
+ u16 pred_qwords = attr->sample_simd_pred_reg_qwords;
+ u16 vec_qwords = attr->sample_simd_vec_reg_qwords;
+
+ data->dyn_size += (hweight64(mask) * vec_qwords +
+ hweight64(pred_mask) * pred_qwords) * sizeof(u64);
+}
+
static void x86_pmu_setup_gpregs_data(struct perf_event *event,
struct perf_sample_data *data,
struct pt_regs *regs)
{
struct perf_event_attr *attr = &event->attr;
u64 sample_type = attr->sample_type;
+ struct x86_perf_regs *perf_regs;
if (sample_type & PERF_SAMPLE_REGS_USER) {
if (user_mode(regs)) {
@@ -1783,8 +1814,13 @@ static void x86_pmu_setup_gpregs_data(struct perf_event *event,
data->regs_user.regs = NULL;
}
data->dyn_size += sizeof(u64);
- if (data->regs_user.regs)
- data->dyn_size += hweight64(attr->sample_regs_user) * sizeof(u64);
+ if (data->regs_user.regs) {
+ data->dyn_size +=
+ hweight64(attr->sample_regs_user) * sizeof(u64);
+ perf_regs = container_of(data->regs_user.regs,
+ struct x86_perf_regs, regs);
+ perf_regs->abi = data->regs_user.abi;
+ }
data->sample_flags |= PERF_SAMPLE_REGS_USER;
}
@@ -1792,8 +1828,13 @@ static void x86_pmu_setup_gpregs_data(struct perf_event *event,
data->regs_intr.regs = regs;
data->regs_intr.abi = perf_reg_abi(current);
data->dyn_size += sizeof(u64);
- if (data->regs_intr.regs)
- data->dyn_size += hweight64(attr->sample_regs_intr) * sizeof(u64);
+ if (data->regs_intr.regs) {
+ data->dyn_size +=
+ hweight64(attr->sample_regs_intr) * sizeof(u64);
+ perf_regs = container_of(data->regs_intr.regs,
+ struct x86_perf_regs, regs);
+ perf_regs->abi = data->regs_intr.abi;
+ }
data->sample_flags |= PERF_SAMPLE_REGS_INTR;
}
}
@@ -1871,7 +1912,7 @@ static void x86_pmu_sample_xregs(struct perf_event *event,
if (WARN_ON_ONCE(!xsave))
return;
- if (event_has_extended_regs(event))
+ if (event_needs_xmm(event))
mask |= XFEATURE_MASK_SSE;
mask &= x86_pmu.ext_regs_mask;
@@ -1899,6 +1940,43 @@ static void x86_pmu_sample_xregs(struct perf_event *event,
}
}
+static void x86_pmu_setup_xregs_data(struct perf_event *event,
+ struct perf_sample_data *data)
+{
+ struct perf_event_attr *attr = &event->attr;
+ u64 sample_type = attr->sample_type;
+ struct x86_perf_regs *perf_regs;
+
+ if (!attr->sample_simd_regs_enabled)
+ return;
+
+ if (sample_type & PERF_SAMPLE_REGS_USER && data->regs_user.abi) {
+ perf_regs = container_of(data->regs_user.regs,
+ struct x86_perf_regs, regs);
+ perf_regs->abi |= PERF_SAMPLE_REGS_ABI_SIMD;
+
+ /* num and qwords of vector and pred registers */
+ data->dyn_size += sizeof(u64);
+ data->regs_user.abi |= PERF_SAMPLE_REGS_ABI_SIMD;
+ x86_pmu_update_xregs_size(attr, data, data->regs_user.regs,
+ attr->sample_simd_vec_reg_user,
+ attr->sample_simd_pred_reg_user);
+ }
+
+ if (sample_type & PERF_SAMPLE_REGS_INTR && data->regs_intr.abi) {
+ perf_regs = container_of(data->regs_intr.regs,
+ struct x86_perf_regs, regs);
+ perf_regs->abi |= PERF_SAMPLE_REGS_ABI_SIMD;
+
+ /* num and qwords of vector and pred registers */
+ data->dyn_size += sizeof(u64);
+ data->regs_intr.abi |= PERF_SAMPLE_REGS_ABI_SIMD;
+ x86_pmu_update_xregs_size(attr, data, data->regs_intr.regs,
+ attr->sample_simd_vec_reg_intr,
+ attr->sample_simd_pred_reg_intr);
+ }
+}
+
void x86_pmu_setup_regs_data(struct perf_event *event,
struct perf_sample_data *data,
struct pt_regs *regs,
@@ -1910,6 +1988,7 @@ void x86_pmu_setup_regs_data(struct perf_event *event,
* which are unnecessary to sample again.
*/
x86_pmu_sample_xregs(event, data, ignore_mask);
+ x86_pmu_setup_xregs_data(event, data);
}
int x86_pmu_handle_irq(struct pt_regs *regs)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 74a41dae8a62..ac9a1c2f0177 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1743,7 +1743,7 @@ static u64 pebs_update_adaptive_cfg(struct perf_event *event)
if (gprs || (attr->precise_ip < 2) || tsx_weight)
pebs_data_cfg |= PEBS_DATACFG_GP;
- if (event_has_extended_regs(event))
+ if (event_needs_xmm(event))
pebs_data_cfg |= PEBS_DATACFG_XMMS;
if (sample_type & PERF_SAMPLE_BRANCH_STACK) {
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index a5e5bffb711e..26d162794a36 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -137,6 +137,18 @@ static inline bool is_acr_event_group(struct perf_event *event)
return check_leader_group(event->group_leader, PERF_X86_EVENT_ACR);
}
+static inline bool event_needs_xmm(struct perf_event *event)
+{
+ if (event->attr.sample_simd_regs_enabled &&
+ event->attr.sample_simd_vec_reg_qwords >= PERF_X86_XMM_QWORDS)
+ return true;
+
+ if (!event->attr.sample_simd_regs_enabled &&
+ event_has_extended_regs(event))
+ return true;
+ return false;
+}
+
struct amd_nb {
int nb_id; /* NorthBridge id */
int refcnt; /* reference count */
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index e47a963a7cf0..e54d21c13494 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -726,6 +726,7 @@ extern void perf_events_lapic_init(void);
struct pt_regs;
struct x86_perf_regs {
struct pt_regs regs;
+ u64 abi;
union {
u64 *xmm_regs;
u32 *xmm_space; /* for xsaves */
diff --git a/arch/x86/include/uapi/asm/perf_regs.h b/arch/x86/include/uapi/asm/perf_regs.h
index 7c9d2bb3833b..c5c1b3930df1 100644
--- a/arch/x86/include/uapi/asm/perf_regs.h
+++ b/arch/x86/include/uapi/asm/perf_regs.h
@@ -55,4 +55,17 @@ enum perf_event_x86_regs {
#define PERF_REG_EXTENDED_MASK (~((1ULL << PERF_REG_X86_XMM0) - 1))
+enum {
+ PERF_X86_SIMD_XMM_REGS = 16,
+ PERF_X86_SIMD_VEC_REGS_MAX = PERF_X86_SIMD_XMM_REGS,
+};
+
+#define PERF_X86_SIMD_VEC_MASK GENMASK_ULL(PERF_X86_SIMD_VEC_REGS_MAX - 1, 0)
+
+enum {
+ /* 1 qword = 8 bytes */
+ PERF_X86_XMM_QWORDS = 2,
+ PERF_X86_SIMD_QWORDS_MAX = PERF_X86_XMM_QWORDS,
+};
+
#endif /* _ASM_X86_PERF_REGS_H */
diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c
index 81204cb7f723..9947a6b5c260 100644
--- a/arch/x86/kernel/perf_regs.c
+++ b/arch/x86/kernel/perf_regs.c
@@ -63,6 +63,9 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
if (idx >= PERF_REG_X86_XMM0 && idx < PERF_REG_X86_XMM_MAX) {
perf_regs = container_of(regs, struct x86_perf_regs, regs);
+ /* SIMD registers are moved to dedicated sample_simd_vec_reg */
+ if (perf_regs->abi & PERF_SAMPLE_REGS_ABI_SIMD)
+ return 0;
if (!perf_regs->xmm_regs)
return 0;
return perf_regs->xmm_regs[idx - PERF_REG_X86_XMM0];
@@ -74,6 +77,51 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
return regs_get_register(regs, pt_regs_offset[idx]);
}
+u64 perf_simd_reg_value(struct pt_regs *regs, int idx,
+ u16 qwords_idx, bool pred)
+{
+ struct x86_perf_regs *perf_regs =
+ container_of(regs, struct x86_perf_regs, regs);
+
+ if (pred)
+ return 0;
+
+ if (WARN_ON_ONCE(idx >= PERF_X86_SIMD_VEC_REGS_MAX ||
+ qwords_idx >= PERF_X86_SIMD_QWORDS_MAX))
+ return 0;
+
+ if (qwords_idx < PERF_X86_XMM_QWORDS) {
+ if (!perf_regs->xmm_regs)
+ return 0;
+ return perf_regs->xmm_regs[idx * PERF_X86_XMM_QWORDS +
+ qwords_idx];
+ }
+
+ return 0;
+}
+
+int perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask,
+ u16 pred_qwords, u32 pred_mask)
+{
+ /* A non-zero pred_qwords implies sample_simd_{pred,vec}_reg_* are in use */
+ if (!pred_qwords)
+ return 0;
+
+ if (!vec_qwords) {
+ if (vec_mask)
+ return -EINVAL;
+ } else {
+ if (vec_qwords != PERF_X86_XMM_QWORDS)
+ return -EINVAL;
+ if (vec_mask & ~PERF_X86_SIMD_VEC_MASK)
+ return -EINVAL;
+ }
+ if (pred_mask)
+ return -EINVAL;
+
+ return 0;
+}
+
#define PERF_REG_X86_RESERVED (((1ULL << PERF_REG_X86_XMM0) - 1) & \
~((1ULL << PERF_REG_X86_MAX) - 1))
@@ -108,7 +156,8 @@ u64 perf_reg_abi(struct task_struct *task)
int perf_reg_validate(u64 mask)
{
- if (!mask || (mask & (REG_NOSUPPORT | PERF_REG_X86_RESERVED)))
+ /* The mask may be 0 if only the SIMD registers are of interest */
+ if (mask & (REG_NOSUPPORT | PERF_REG_X86_RESERVED))
return -EINVAL;
return 0;
--
2.34.1
* [Patch v7 15/24] perf/x86: Enable YMM sampling using sample_simd_vec_reg_* fields
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (13 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 14/24] perf/x86: Enable XMM sampling using sample_simd_vec_reg_* fields Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-24 0:41 ` [Patch v7 16/24] perf/x86: Enable ZMM " Dapeng Mi
` (10 subsequent siblings)
25 siblings, 0 replies; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Kan Liang, Dapeng Mi
From: Kan Liang <kan.liang@linux.intel.com>
This patch introduces support for sampling YMM registers via the
sample_simd_vec_reg_* fields.
Each YMM register consists of 4 u64 qwords, assembled from two halves:
XMM (the lower 2 qwords) and YMMH (the upper 2 qwords). Although both
halves can be retrieved with a single XSAVES instruction, they are
stored in separate areas of the XSAVE buffer. The perf_simd_reg_value()
function assembles these halves into a complete YMM register for output
to userspace.
Additionally, sample_simd_vec_reg_qwords should be set to 4 to indicate
YMM sampling.
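The indexing described above can be sketched with a small standalone
helper (illustrative only, not kernel code; the constants mirror
PERF_X86_XMM_QWORDS and PERF_X86_YMM_QWORDS, and the flat per-area
layout is an assumption of this sketch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical helper mirroring the perf_simd_reg_value() layout for
 * YMM: qwords 0-1 of YMMn come from the XMM save area, qwords 2-3 from
 * the YMMH area; each area stores 2 qwords per register.
 */
#define XMM_QWORDS  2
#define YMM_QWORDS  4
#define YMMH_QWORDS (YMM_QWORDS - XMM_QWORDS)

static uint64_t ymm_qword(const uint64_t *xmm, const uint64_t *ymmh,
                          int idx, int qword)
{
	if (qword < XMM_QWORDS)                 /* low half: XMM area */
		return xmm[idx * XMM_QWORDS + qword];
	/* high half: YMMH area, rebased to its own 2-qword stride */
	return ymmh[idx * YMMH_QWORDS + (qword - XMM_QWORDS)];
}
```

A consumer walks qword 0..3 of a given idx and gets the full 256-bit
value stitched from the two save areas.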
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/core.c | 8 ++++++++
arch/x86/events/perf_event.h | 9 +++++++++
arch/x86/include/asm/perf_event.h | 4 ++++
arch/x86/include/uapi/asm/perf_regs.h | 6 ++++--
arch/x86/kernel/perf_regs.c | 10 +++++++++-
5 files changed, 34 insertions(+), 3 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 3c9b79b46a66..cdea5a10ec9f 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -720,6 +720,9 @@ int x86_pmu_hw_config(struct perf_event *event)
if (event_needs_xmm(event) &&
!(x86_pmu.ext_regs_mask & XFEATURE_MASK_SSE))
return -EINVAL;
+ if (event_needs_ymm(event) &&
+ !(x86_pmu.ext_regs_mask & XFEATURE_MASK_YMM))
+ return -EINVAL;
}
}
@@ -1844,6 +1847,7 @@ inline void x86_pmu_clear_perf_regs(struct pt_regs *regs)
struct x86_perf_regs *perf_regs = container_of(regs, struct x86_perf_regs, regs);
perf_regs->xmm_regs = NULL;
+ perf_regs->ymmh_regs = NULL;
}
static inline void x86_pmu_update_xregs(struct x86_perf_regs *perf_regs,
@@ -1859,6 +1863,8 @@ static inline void x86_pmu_update_xregs(struct x86_perf_regs *perf_regs,
if (mask & XFEATURE_MASK_SSE)
perf_regs->xmm_space = xsave->i387.xmm_space;
+ if (mask & XFEATURE_MASK_YMM)
+ perf_regs->ymmh = get_xsave_addr(xsave, XFEATURE_YMM);
}
/*
@@ -1914,6 +1920,8 @@ static void x86_pmu_sample_xregs(struct perf_event *event,
if (event_needs_xmm(event))
mask |= XFEATURE_MASK_SSE;
+ if (event_needs_ymm(event))
+ mask |= XFEATURE_MASK_YMM;
mask &= x86_pmu.ext_regs_mask;
if ((sample_type & PERF_SAMPLE_REGS_USER) && data->regs_user.abi)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 26d162794a36..8d5484462f75 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -149,6 +149,15 @@ static inline bool event_needs_xmm(struct perf_event *event)
return false;
}
+static inline bool event_needs_ymm(struct perf_event *event)
+{
+ if (event->attr.sample_simd_regs_enabled &&
+ event->attr.sample_simd_vec_reg_qwords >= PERF_X86_YMM_QWORDS)
+ return true;
+
+ return false;
+}
+
struct amd_nb {
int nb_id; /* NorthBridge id */
int refcnt; /* reference count */
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index e54d21c13494..1d03b86be65d 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -731,6 +731,10 @@ struct x86_perf_regs {
u64 *xmm_regs;
u32 *xmm_space; /* for xsaves */
};
+ union {
+ u64 *ymmh_regs;
+ struct ymmh_struct *ymmh;
+ };
};
extern unsigned long perf_arch_instruction_pointer(struct pt_regs *regs);
diff --git a/arch/x86/include/uapi/asm/perf_regs.h b/arch/x86/include/uapi/asm/perf_regs.h
index c5c1b3930df1..42d53978ea72 100644
--- a/arch/x86/include/uapi/asm/perf_regs.h
+++ b/arch/x86/include/uapi/asm/perf_regs.h
@@ -57,7 +57,8 @@ enum perf_event_x86_regs {
enum {
PERF_X86_SIMD_XMM_REGS = 16,
- PERF_X86_SIMD_VEC_REGS_MAX = PERF_X86_SIMD_XMM_REGS,
+ PERF_X86_SIMD_YMM_REGS = 16,
+ PERF_X86_SIMD_VEC_REGS_MAX = PERF_X86_SIMD_YMM_REGS,
};
#define PERF_X86_SIMD_VEC_MASK GENMASK_ULL(PERF_X86_SIMD_VEC_REGS_MAX - 1, 0)
@@ -65,7 +66,8 @@ enum {
enum {
/* 1 qword = 8 bytes */
PERF_X86_XMM_QWORDS = 2,
- PERF_X86_SIMD_QWORDS_MAX = PERF_X86_XMM_QWORDS,
+ PERF_X86_YMM_QWORDS = 4,
+ PERF_X86_SIMD_QWORDS_MAX = PERF_X86_YMM_QWORDS,
};
#endif /* _ASM_X86_PERF_REGS_H */
diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c
index 9947a6b5c260..4062a679cc5b 100644
--- a/arch/x86/kernel/perf_regs.c
+++ b/arch/x86/kernel/perf_regs.c
@@ -77,6 +77,8 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
return regs_get_register(regs, pt_regs_offset[idx]);
}
+#define PERF_X86_YMMH_QWORDS (PERF_X86_YMM_QWORDS / 2)
+
u64 perf_simd_reg_value(struct pt_regs *regs, int idx,
u16 qwords_idx, bool pred)
{
@@ -95,6 +97,11 @@ u64 perf_simd_reg_value(struct pt_regs *regs, int idx,
return 0;
return perf_regs->xmm_regs[idx * PERF_X86_XMM_QWORDS +
qwords_idx];
+ } else if (qwords_idx < PERF_X86_YMM_QWORDS) {
+ if (!perf_regs->ymmh_regs)
+ return 0;
+ return perf_regs->ymmh_regs[idx * PERF_X86_YMMH_QWORDS +
+ qwords_idx - PERF_X86_XMM_QWORDS];
}
return 0;
@@ -111,7 +118,8 @@ int perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask,
if (vec_mask)
return -EINVAL;
} else {
- if (vec_qwords != PERF_X86_XMM_QWORDS)
+ if (vec_qwords != PERF_X86_XMM_QWORDS &&
+ vec_qwords != PERF_X86_YMM_QWORDS)
return -EINVAL;
if (vec_mask & ~PERF_X86_SIMD_VEC_MASK)
return -EINVAL;
--
2.34.1
* [Patch v7 16/24] perf/x86: Enable ZMM sampling using sample_simd_vec_reg_* fields
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (14 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 15/24] perf/x86: Enable YMM " Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-24 0:41 ` [Patch v7 17/24] perf/x86: Enable OPMASK sampling using sample_simd_pred_reg_* fields Dapeng Mi
` (9 subsequent siblings)
25 siblings, 0 replies; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Kan Liang, Dapeng Mi
From: Kan Liang <kan.liang@linux.intel.com>
This patch adds support for sampling ZMM registers via the
sample_simd_vec_reg_* fields.
Each ZMM register consists of 8 u64 qwords, and current x86 hardware
supports up to 32 ZMM registers. The registers ZMM0 ~ ZMM15 are
assembled from three parts: XMM (the lower 2 qwords), YMMH (the middle
2 qwords), and ZMMH (the upper 4 qwords). The perf_simd_reg_value()
function assembles these three parts into a complete ZMM register for
output to userspace.
The registers ZMM16 ~ ZMM31 are saved as whole registers in the
Hi16_ZMM area, so each one can be read as a whole and output to
userspace directly.
Additionally, sample_simd_vec_reg_qwords should be set to 8 to indicate
ZMM sampling.
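The three-part assembly plus the Hi16_ZMM special case can be sketched
as follows (a standalone illustration, not kernel code; the constants
mirror the PERF_X86_*_QWORDS and PERF_X86_H16ZMM_BASE values, and the
flat per-area layout is an assumption of this sketch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical sketch of the ZMM assembly rules: ZMM16-31 are read
 * whole from the Hi16_ZMM area (8 qwords each); ZMM0-15 are stitched
 * from the XMM (qwords 0-1), YMMH (qwords 2-3) and ZMMH (qwords 4-7)
 * areas, each with its own per-register stride.
 */
#define XMM_QWORDS   2
#define YMM_QWORDS   4
#define ZMM_QWORDS   8
#define H16ZMM_BASE  16

static uint64_t zmm_qword(const uint64_t *xmm, const uint64_t *ymmh,
                          const uint64_t *zmmh, const uint64_t *h16zmm,
                          int idx, int qword)
{
	if (idx >= H16ZMM_BASE)         /* whole register in one area */
		return h16zmm[(idx - H16ZMM_BASE) * ZMM_QWORDS + qword];
	if (qword < XMM_QWORDS)
		return xmm[idx * XMM_QWORDS + qword];
	if (qword < YMM_QWORDS)
		return ymmh[idx * (YMM_QWORDS - XMM_QWORDS) +
			    qword - XMM_QWORDS];
	return zmmh[idx * (ZMM_QWORDS - YMM_QWORDS) + qword - YMM_QWORDS];
}
```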
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/core.c | 16 ++++++++++++++++
arch/x86/events/perf_event.h | 19 +++++++++++++++++++
arch/x86/include/asm/perf_event.h | 8 ++++++++
arch/x86/include/uapi/asm/perf_regs.h | 8 ++++++--
arch/x86/kernel/perf_regs.c | 16 +++++++++++++++-
5 files changed, 64 insertions(+), 3 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index cdea5a10ec9f..e5f5a6971d72 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -723,6 +723,12 @@ int x86_pmu_hw_config(struct perf_event *event)
if (event_needs_ymm(event) &&
!(x86_pmu.ext_regs_mask & XFEATURE_MASK_YMM))
return -EINVAL;
+ if (event_needs_low16_zmm(event) &&
+ !(x86_pmu.ext_regs_mask & XFEATURE_MASK_ZMM_Hi256))
+ return -EINVAL;
+ if (event_needs_high16_zmm(event) &&
+ !(x86_pmu.ext_regs_mask & XFEATURE_MASK_Hi16_ZMM))
+ return -EINVAL;
}
}
@@ -1848,6 +1854,8 @@ inline void x86_pmu_clear_perf_regs(struct pt_regs *regs)
perf_regs->xmm_regs = NULL;
perf_regs->ymmh_regs = NULL;
+ perf_regs->zmmh_regs = NULL;
+ perf_regs->h16zmm_regs = NULL;
}
static inline void x86_pmu_update_xregs(struct x86_perf_regs *perf_regs,
@@ -1865,6 +1873,10 @@ static inline void x86_pmu_update_xregs(struct x86_perf_regs *perf_regs,
perf_regs->xmm_space = xsave->i387.xmm_space;
if (mask & XFEATURE_MASK_YMM)
perf_regs->ymmh = get_xsave_addr(xsave, XFEATURE_YMM);
+ if (mask & XFEATURE_MASK_ZMM_Hi256)
+ perf_regs->zmmh = get_xsave_addr(xsave, XFEATURE_ZMM_Hi256);
+ if (mask & XFEATURE_MASK_Hi16_ZMM)
+ perf_regs->h16zmm = get_xsave_addr(xsave, XFEATURE_Hi16_ZMM);
}
/*
@@ -1922,6 +1934,10 @@ static void x86_pmu_sample_xregs(struct perf_event *event,
mask |= XFEATURE_MASK_SSE;
if (event_needs_ymm(event))
mask |= XFEATURE_MASK_YMM;
+ if (event_needs_low16_zmm(event))
+ mask |= XFEATURE_MASK_ZMM_Hi256;
+ if (event_needs_high16_zmm(event))
+ mask |= XFEATURE_MASK_Hi16_ZMM;
mask &= x86_pmu.ext_regs_mask;
if ((sample_type & PERF_SAMPLE_REGS_USER) && data->regs_user.abi)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 8d5484462f75..841c8880e6fd 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -158,6 +158,25 @@ static inline bool event_needs_ymm(struct perf_event *event)
return false;
}
+static inline bool event_needs_low16_zmm(struct perf_event *event)
+{
+ if (event->attr.sample_simd_regs_enabled &&
+ event->attr.sample_simd_vec_reg_qwords >= PERF_X86_ZMM_QWORDS)
+ return true;
+
+ return false;
+}
+
+static inline bool event_needs_high16_zmm(struct perf_event *event)
+{
+ if (event->attr.sample_simd_regs_enabled &&
+ (fls64(event->attr.sample_simd_vec_reg_intr) > PERF_X86_H16ZMM_BASE ||
+ fls64(event->attr.sample_simd_vec_reg_user) > PERF_X86_H16ZMM_BASE))
+ return true;
+
+ return false;
+}
+
struct amd_nb {
int nb_id; /* NorthBridge id */
int refcnt; /* reference count */
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 1d03b86be65d..273840bd7b33 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -735,6 +735,14 @@ struct x86_perf_regs {
u64 *ymmh_regs;
struct ymmh_struct *ymmh;
};
+ union {
+ u64 *zmmh_regs;
+ struct avx_512_zmm_uppers_state *zmmh;
+ };
+ union {
+ u64 *h16zmm_regs;
+ struct avx_512_hi16_state *h16zmm;
+ };
};
extern unsigned long perf_arch_instruction_pointer(struct pt_regs *regs);
diff --git a/arch/x86/include/uapi/asm/perf_regs.h b/arch/x86/include/uapi/asm/perf_regs.h
index 42d53978ea72..a889fd92f2f0 100644
--- a/arch/x86/include/uapi/asm/perf_regs.h
+++ b/arch/x86/include/uapi/asm/perf_regs.h
@@ -58,16 +58,20 @@ enum perf_event_x86_regs {
enum {
PERF_X86_SIMD_XMM_REGS = 16,
PERF_X86_SIMD_YMM_REGS = 16,
- PERF_X86_SIMD_VEC_REGS_MAX = PERF_X86_SIMD_YMM_REGS,
+ PERF_X86_SIMD_ZMM_REGS = 32,
+ PERF_X86_SIMD_VEC_REGS_MAX = PERF_X86_SIMD_ZMM_REGS,
};
#define PERF_X86_SIMD_VEC_MASK GENMASK_ULL(PERF_X86_SIMD_VEC_REGS_MAX - 1, 0)
+#define PERF_X86_H16ZMM_BASE 16
+
enum {
/* 1 qword = 8 bytes */
PERF_X86_XMM_QWORDS = 2,
PERF_X86_YMM_QWORDS = 4,
- PERF_X86_SIMD_QWORDS_MAX = PERF_X86_YMM_QWORDS,
+ PERF_X86_ZMM_QWORDS = 8,
+ PERF_X86_SIMD_QWORDS_MAX = PERF_X86_ZMM_QWORDS,
};
#endif /* _ASM_X86_PERF_REGS_H */
diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c
index 4062a679cc5b..fe4ff4d2de88 100644
--- a/arch/x86/kernel/perf_regs.c
+++ b/arch/x86/kernel/perf_regs.c
@@ -78,6 +78,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
}
#define PERF_X86_YMMH_QWORDS (PERF_X86_YMM_QWORDS / 2)
+#define PERF_X86_ZMMH_QWORDS (PERF_X86_ZMM_QWORDS / 2)
u64 perf_simd_reg_value(struct pt_regs *regs, int idx,
u16 qwords_idx, bool pred)
@@ -92,6 +93,13 @@ u64 perf_simd_reg_value(struct pt_regs *regs, int idx,
qwords_idx >= PERF_X86_SIMD_QWORDS_MAX))
return 0;
+ if (idx >= PERF_X86_H16ZMM_BASE) {
+ if (!perf_regs->h16zmm_regs)
+ return 0;
+ return perf_regs->h16zmm_regs[(idx - PERF_X86_H16ZMM_BASE) *
+ PERF_X86_ZMM_QWORDS + qwords_idx];
+ }
+
if (qwords_idx < PERF_X86_XMM_QWORDS) {
if (!perf_regs->xmm_regs)
return 0;
@@ -102,6 +110,11 @@ u64 perf_simd_reg_value(struct pt_regs *regs, int idx,
return 0;
return perf_regs->ymmh_regs[idx * PERF_X86_YMMH_QWORDS +
qwords_idx - PERF_X86_XMM_QWORDS];
+ } else if (qwords_idx < PERF_X86_ZMM_QWORDS) {
+ if (!perf_regs->zmmh_regs)
+ return 0;
+ return perf_regs->zmmh_regs[idx * PERF_X86_ZMMH_QWORDS +
+ qwords_idx - PERF_X86_YMM_QWORDS];
}
return 0;
@@ -119,7 +132,8 @@ int perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask,
return -EINVAL;
} else {
if (vec_qwords != PERF_X86_XMM_QWORDS &&
- vec_qwords != PERF_X86_YMM_QWORDS)
+ vec_qwords != PERF_X86_YMM_QWORDS &&
+ vec_qwords != PERF_X86_ZMM_QWORDS)
return -EINVAL;
if (vec_mask & ~PERF_X86_SIMD_VEC_MASK)
return -EINVAL;
--
2.34.1
* [Patch v7 17/24] perf/x86: Enable OPMASK sampling using sample_simd_pred_reg_* fields
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (15 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 16/24] perf/x86: Enable ZMM " Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-24 0:41 ` [Patch v7 18/24] perf: Enhance perf_reg_validate() with simd_enabled argument Dapeng Mi
` (8 subsequent siblings)
25 siblings, 0 replies; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Kan Liang, Dapeng Mi
From: Kan Liang <kan.liang@linux.intel.com>
This patch adds support for sampling OPMASK registers via the
sample_simd_pred_reg_* fields.
Each OPMASK register is a single u64 qword, and current x86 hardware
supports 8 OPMASK registers. The perf_simd_reg_value() function
outputs the OPMASK values to userspace.
Additionally, sample_simd_pred_reg_qwords should be set to 1 to indicate
OPMASK sampling.
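With this patch the combined attr validation rules can be summarized in
one small predicate (a sketch with the constants inlined, assuming the
same rules as perf_simd_reg_validate() in this series; not the kernel
function itself, and returning 1 for "acceptable" rather than 0):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the validation rules: when SIMD sampling is requested
 * (pred_qwords non-zero), vec_qwords must name an XMM/YMM/ZMM width
 * (2/4/8) if any vector registers are selected, masks must fit the
 * 32 vector / 8 predicate registers, and pred_qwords must be 1 (the
 * OPMASK width). Returns 1 when the combination is acceptable.
 */
static int simd_attr_ok(uint16_t vec_qwords, uint64_t vec_mask,
                        uint16_t pred_qwords, uint32_t pred_mask)
{
	if (!pred_qwords)       /* SIMD sampling not requested/supported */
		return 1;
	if (!vec_qwords) {
		if (vec_mask)
			return 0;       /* mask set but no width */
	} else {
		if (vec_qwords != 2 && vec_qwords != 4 && vec_qwords != 8)
			return 0;       /* not an XMM/YMM/ZMM width */
		if (vec_mask >> 32)
			return 0;       /* only 32 vector registers */
	}
	if (pred_qwords != 1)
		return 0;               /* OPMASK registers are 1 qword */
	if (pred_mask & ~0xffu)
		return 0;               /* only k0-k7 */
	return 1;
}
```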
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/core.c | 8 ++++++++
arch/x86/events/perf_event.h | 10 ++++++++++
arch/x86/include/asm/perf_event.h | 4 ++++
arch/x86/include/uapi/asm/perf_regs.h | 5 +++++
arch/x86/kernel/perf_regs.c | 15 ++++++++++++---
5 files changed, 39 insertions(+), 3 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index e5f5a6971d72..d86a4fbea1ed 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -729,6 +729,9 @@ int x86_pmu_hw_config(struct perf_event *event)
if (event_needs_high16_zmm(event) &&
!(x86_pmu.ext_regs_mask & XFEATURE_MASK_Hi16_ZMM))
return -EINVAL;
+ if (event_needs_opmask(event) &&
+ !(x86_pmu.ext_regs_mask & XFEATURE_MASK_OPMASK))
+ return -EINVAL;
}
}
@@ -1856,6 +1859,7 @@ inline void x86_pmu_clear_perf_regs(struct pt_regs *regs)
perf_regs->ymmh_regs = NULL;
perf_regs->zmmh_regs = NULL;
perf_regs->h16zmm_regs = NULL;
+ perf_regs->opmask_regs = NULL;
}
static inline void x86_pmu_update_xregs(struct x86_perf_regs *perf_regs,
@@ -1877,6 +1881,8 @@ static inline void x86_pmu_update_xregs(struct x86_perf_regs *perf_regs,
perf_regs->zmmh = get_xsave_addr(xsave, XFEATURE_ZMM_Hi256);
if (mask & XFEATURE_MASK_Hi16_ZMM)
perf_regs->h16zmm = get_xsave_addr(xsave, XFEATURE_Hi16_ZMM);
+ if (mask & XFEATURE_MASK_OPMASK)
+ perf_regs->opmask = get_xsave_addr(xsave, XFEATURE_OPMASK);
}
/*
@@ -1938,6 +1944,8 @@ static void x86_pmu_sample_xregs(struct perf_event *event,
mask |= XFEATURE_MASK_ZMM_Hi256;
if (event_needs_high16_zmm(event))
mask |= XFEATURE_MASK_Hi16_ZMM;
+ if (event_needs_opmask(event))
+ mask |= XFEATURE_MASK_OPMASK;
mask &= x86_pmu.ext_regs_mask;
if ((sample_type & PERF_SAMPLE_REGS_USER) && data->regs_user.abi)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 841c8880e6fd..00f436f5840b 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -177,6 +177,16 @@ static inline bool event_needs_high16_zmm(struct perf_event *event)
return false;
}
+static inline bool event_needs_opmask(struct perf_event *event)
+{
+ if (event->attr.sample_simd_regs_enabled &&
+ (event->attr.sample_simd_pred_reg_intr ||
+ event->attr.sample_simd_pred_reg_user))
+ return true;
+
+ return false;
+}
+
struct amd_nb {
int nb_id; /* NorthBridge id */
int refcnt; /* reference count */
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 273840bd7b33..7e8b60bddd5a 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -743,6 +743,10 @@ struct x86_perf_regs {
u64 *h16zmm_regs;
struct avx_512_hi16_state *h16zmm;
};
+ union {
+ u64 *opmask_regs;
+ struct avx_512_opmask_state *opmask;
+ };
};
extern unsigned long perf_arch_instruction_pointer(struct pt_regs *regs);
diff --git a/arch/x86/include/uapi/asm/perf_regs.h b/arch/x86/include/uapi/asm/perf_regs.h
index a889fd92f2f0..f4a1630c1928 100644
--- a/arch/x86/include/uapi/asm/perf_regs.h
+++ b/arch/x86/include/uapi/asm/perf_regs.h
@@ -60,14 +60,19 @@ enum {
PERF_X86_SIMD_YMM_REGS = 16,
PERF_X86_SIMD_ZMM_REGS = 32,
PERF_X86_SIMD_VEC_REGS_MAX = PERF_X86_SIMD_ZMM_REGS,
+
+ PERF_X86_SIMD_OPMASK_REGS = 8,
+ PERF_X86_SIMD_PRED_REGS_MAX = PERF_X86_SIMD_OPMASK_REGS,
};
+#define PERF_X86_SIMD_PRED_MASK GENMASK(PERF_X86_SIMD_PRED_REGS_MAX - 1, 0)
#define PERF_X86_SIMD_VEC_MASK GENMASK_ULL(PERF_X86_SIMD_VEC_REGS_MAX - 1, 0)
#define PERF_X86_H16ZMM_BASE 16
enum {
/* 1 qword = 8 bytes */
+ PERF_X86_OPMASK_QWORDS = 1,
PERF_X86_XMM_QWORDS = 2,
PERF_X86_YMM_QWORDS = 4,
PERF_X86_ZMM_QWORDS = 8,
diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c
index fe4ff4d2de88..2e3c10dffb35 100644
--- a/arch/x86/kernel/perf_regs.c
+++ b/arch/x86/kernel/perf_regs.c
@@ -86,8 +86,14 @@ u64 perf_simd_reg_value(struct pt_regs *regs, int idx,
struct x86_perf_regs *perf_regs =
container_of(regs, struct x86_perf_regs, regs);
- if (pred)
- return 0;
+ if (pred) {
+ if (WARN_ON_ONCE(idx >= PERF_X86_SIMD_PRED_REGS_MAX ||
+ qwords_idx >= PERF_X86_OPMASK_QWORDS))
+ return 0;
+ if (!perf_regs->opmask_regs)
+ return 0;
+ return perf_regs->opmask_regs[idx];
+ }
if (WARN_ON_ONCE(idx >= PERF_X86_SIMD_VEC_REGS_MAX ||
qwords_idx >= PERF_X86_SIMD_QWORDS_MAX))
@@ -138,7 +144,10 @@ int perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask,
if (vec_mask & ~PERF_X86_SIMD_VEC_MASK)
return -EINVAL;
}
- if (pred_mask)
+
+ if (pred_qwords != PERF_X86_OPMASK_QWORDS)
+ return -EINVAL;
+ if (pred_mask & ~PERF_X86_SIMD_PRED_MASK)
return -EINVAL;
return 0;
--
2.34.1
* [Patch v7 18/24] perf: Enhance perf_reg_validate() with simd_enabled argument
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (16 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 17/24] perf/x86: Enable OPMASK sampling using sample_simd_pred_reg_* fields Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-24 0:41 ` [Patch v7 19/24] perf/x86: Enable eGPRs sampling using sample_regs_* fields Dapeng Mi
` (7 subsequent siblings)
25 siblings, 0 replies; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Dapeng Mi
The upcoming patch will support x86 APX eGPRs sampling by using the
reclaimed XMM register space to represent eGPRs in sample_regs_* fields.
To differentiate between XMM registers and eGPRs in the sample_regs_*
fields, an additional argument, simd_enabled, is added to the
perf_reg_validate() helper. When simd_enabled is set, the
sample_regs_* fields represent eGPRs on the x86 platform; otherwise
they represent XMM registers.
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/arm/kernel/perf_regs.c | 2 +-
arch/arm64/kernel/perf_regs.c | 2 +-
arch/csky/kernel/perf_regs.c | 2 +-
arch/loongarch/kernel/perf_regs.c | 2 +-
arch/mips/kernel/perf_regs.c | 2 +-
arch/parisc/kernel/perf_regs.c | 2 +-
arch/powerpc/perf/perf_regs.c | 2 +-
arch/riscv/kernel/perf_regs.c | 2 +-
arch/s390/kernel/perf_regs.c | 2 +-
arch/x86/kernel/perf_regs.c | 4 ++--
include/linux/perf_regs.h | 2 +-
kernel/events/core.c | 8 +++++---
12 files changed, 17 insertions(+), 15 deletions(-)
diff --git a/arch/arm/kernel/perf_regs.c b/arch/arm/kernel/perf_regs.c
index d575a4c3ca56..838d701adf4d 100644
--- a/arch/arm/kernel/perf_regs.c
+++ b/arch/arm/kernel/perf_regs.c
@@ -18,7 +18,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
#define REG_RESERVED (~((1ULL << PERF_REG_ARM_MAX) - 1))
-int perf_reg_validate(u64 mask)
+int perf_reg_validate(u64 mask, bool simd_enabled)
{
if (!mask || mask & REG_RESERVED)
return -EINVAL;
diff --git a/arch/arm64/kernel/perf_regs.c b/arch/arm64/kernel/perf_regs.c
index 70e2f13f587f..71a3e0238de4 100644
--- a/arch/arm64/kernel/perf_regs.c
+++ b/arch/arm64/kernel/perf_regs.c
@@ -77,7 +77,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
#define REG_RESERVED (~((1ULL << PERF_REG_ARM64_MAX) - 1))
-int perf_reg_validate(u64 mask)
+int perf_reg_validate(u64 mask, bool simd_enabled)
{
u64 reserved_mask = REG_RESERVED;
diff --git a/arch/csky/kernel/perf_regs.c b/arch/csky/kernel/perf_regs.c
index 94601f37b596..c932a96afc56 100644
--- a/arch/csky/kernel/perf_regs.c
+++ b/arch/csky/kernel/perf_regs.c
@@ -18,7 +18,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
#define REG_RESERVED (~((1ULL << PERF_REG_CSKY_MAX) - 1))
-int perf_reg_validate(u64 mask)
+int perf_reg_validate(u64 mask, bool simd_enabled)
{
if (!mask || mask & REG_RESERVED)
return -EINVAL;
diff --git a/arch/loongarch/kernel/perf_regs.c b/arch/loongarch/kernel/perf_regs.c
index 8dd604f01745..164514f40ae0 100644
--- a/arch/loongarch/kernel/perf_regs.c
+++ b/arch/loongarch/kernel/perf_regs.c
@@ -25,7 +25,7 @@ u64 perf_reg_abi(struct task_struct *tsk)
}
#endif /* CONFIG_32BIT */
-int perf_reg_validate(u64 mask)
+int perf_reg_validate(u64 mask, bool simd_enabled)
{
if (!mask)
return -EINVAL;
diff --git a/arch/mips/kernel/perf_regs.c b/arch/mips/kernel/perf_regs.c
index 7736d3c5ebd2..00a5201dbd5d 100644
--- a/arch/mips/kernel/perf_regs.c
+++ b/arch/mips/kernel/perf_regs.c
@@ -28,7 +28,7 @@ u64 perf_reg_abi(struct task_struct *tsk)
}
#endif /* CONFIG_32BIT */
-int perf_reg_validate(u64 mask)
+int perf_reg_validate(u64 mask, bool simd_enabled)
{
if (!mask)
return -EINVAL;
diff --git a/arch/parisc/kernel/perf_regs.c b/arch/parisc/kernel/perf_regs.c
index b9fe1f2fcb9b..4f21aab5405c 100644
--- a/arch/parisc/kernel/perf_regs.c
+++ b/arch/parisc/kernel/perf_regs.c
@@ -34,7 +34,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
#define REG_RESERVED (~((1ULL << PERF_REG_PARISC_MAX) - 1))
-int perf_reg_validate(u64 mask)
+int perf_reg_validate(u64 mask, bool simd_enabled)
{
if (!mask || mask & REG_RESERVED)
return -EINVAL;
diff --git a/arch/powerpc/perf/perf_regs.c b/arch/powerpc/perf/perf_regs.c
index 350dccb0143c..a01d8a903640 100644
--- a/arch/powerpc/perf/perf_regs.c
+++ b/arch/powerpc/perf/perf_regs.c
@@ -125,7 +125,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
return regs_get_register(regs, pt_regs_offset[idx]);
}
-int perf_reg_validate(u64 mask)
+int perf_reg_validate(u64 mask, bool simd_enabled)
{
if (!mask || mask & REG_RESERVED)
return -EINVAL;
diff --git a/arch/riscv/kernel/perf_regs.c b/arch/riscv/kernel/perf_regs.c
index 3bba8deababb..1ecc8760b88b 100644
--- a/arch/riscv/kernel/perf_regs.c
+++ b/arch/riscv/kernel/perf_regs.c
@@ -18,7 +18,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
#define REG_RESERVED (~((1ULL << PERF_REG_RISCV_MAX) - 1))
-int perf_reg_validate(u64 mask)
+int perf_reg_validate(u64 mask, bool simd_enabled)
{
if (!mask || mask & REG_RESERVED)
return -EINVAL;
diff --git a/arch/s390/kernel/perf_regs.c b/arch/s390/kernel/perf_regs.c
index 7b305f1456f8..6496fd23c540 100644
--- a/arch/s390/kernel/perf_regs.c
+++ b/arch/s390/kernel/perf_regs.c
@@ -34,7 +34,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
#define REG_RESERVED (~((1UL << PERF_REG_S390_MAX) - 1))
-int perf_reg_validate(u64 mask)
+int perf_reg_validate(u64 mask, bool simd_enabled)
{
if (!mask || mask & REG_RESERVED)
return -EINVAL;
diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c
index 2e3c10dffb35..9b3134220b3e 100644
--- a/arch/x86/kernel/perf_regs.c
+++ b/arch/x86/kernel/perf_regs.c
@@ -166,7 +166,7 @@ int perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask,
(1ULL << PERF_REG_X86_R14) | \
(1ULL << PERF_REG_X86_R15))
-int perf_reg_validate(u64 mask)
+int perf_reg_validate(u64 mask, bool simd_enabled)
{
if (!mask || (mask & (REG_NOSUPPORT | PERF_REG_X86_RESERVED)))
return -EINVAL;
@@ -185,7 +185,7 @@ u64 perf_reg_abi(struct task_struct *task)
(1ULL << PERF_REG_X86_FS) | \
(1ULL << PERF_REG_X86_GS))
-int perf_reg_validate(u64 mask)
+int perf_reg_validate(u64 mask, bool simd_enabled)
{
/* The mask can be 0 if only SIMD registers are of interest */
if (mask & (REG_NOSUPPORT | PERF_REG_X86_RESERVED))
diff --git a/include/linux/perf_regs.h b/include/linux/perf_regs.h
index 518f28c6a7d4..09dbc2fc3859 100644
--- a/include/linux/perf_regs.h
+++ b/include/linux/perf_regs.h
@@ -10,7 +10,7 @@ struct perf_regs {
};
u64 perf_reg_value(struct pt_regs *regs, int idx);
-int perf_reg_validate(u64 mask);
+int perf_reg_validate(u64 mask, bool simd_enabled);
u64 perf_reg_abi(struct task_struct *task);
void perf_get_regs_user(struct perf_regs *regs_user,
struct pt_regs *regs);
diff --git a/kernel/events/core.c b/kernel/events/core.c
index de42575f517b..797bddeca46a 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7736,7 +7736,7 @@ u64 __weak perf_reg_value(struct pt_regs *regs, int idx)
return 0;
}
-int __weak perf_reg_validate(u64 mask)
+int __weak perf_reg_validate(u64 mask, bool simd_enabled)
{
return mask ? -ENOSYS : 0;
}
@@ -13622,7 +13622,8 @@ static int perf_copy_attr(struct perf_event_attr __user *uattr,
}
if (attr->sample_type & PERF_SAMPLE_REGS_USER) {
- ret = perf_reg_validate(attr->sample_regs_user);
+ ret = perf_reg_validate(attr->sample_regs_user,
+ attr->sample_simd_regs_enabled);
if (ret)
return ret;
ret = perf_simd_reg_validate(attr->sample_simd_vec_reg_qwords,
@@ -13652,7 +13653,8 @@ static int perf_copy_attr(struct perf_event_attr __user *uattr,
attr->sample_max_stack = sysctl_perf_event_max_stack;
if (attr->sample_type & PERF_SAMPLE_REGS_INTR) {
- ret = perf_reg_validate(attr->sample_regs_intr);
+ ret = perf_reg_validate(attr->sample_regs_intr,
+ attr->sample_simd_regs_enabled);
if (ret)
return ret;
ret = perf_simd_reg_validate(attr->sample_simd_vec_reg_qwords,
--
2.34.1
* [Patch v7 19/24] perf/x86: Enable eGPRs sampling using sample_regs_* fields
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (17 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 18/24] perf: Enhance perf_reg_validate() with simd_enabled argument Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-24 0:41 ` [Patch v7 20/24] perf/x86: Enable SSP " Dapeng Mi
` (6 subsequent siblings)
25 siblings, 0 replies; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Kan Liang, Dapeng Mi
From: Kan Liang <kan.liang@linux.intel.com>
This patch enables sampling of APX eGPRs (R16 ~ R31) via the
sample_regs_* fields.
To sample eGPRs, the sample_simd_regs_enabled field must be set. This
allows the spare space (reclaimed from the original XMM space) in the
sample_regs_* fields to be used for representing eGPRs.
The perf_reg_value() function needs to check if the
PERF_SAMPLE_REGS_ABI_SIMD flag is set first, and then determine whether
to output eGPRs or legacy XMM registers to userspace.
The perf_reg_validate() function first checks the simd_enabled argument
to determine if the eGPRs bitmap is represented in sample_regs_* fields.
It then validates the eGPRs bitmap accordingly.
eGPRs sampling is currently supported only on the x86_64 architecture,
as APX is available only on x86_64 platforms.
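The overloading of the high sample_regs_* bits can be sketched as a
tiny dispatch (illustrative only: the bit-32 boundary matches the
existing PERF_REG_X86_XMM0 position, but treating every bit above it
as one eGPR is an assumption of this sketch, not the UAPI encoding):

```c
#include <assert.h>
#include <string.h>

/*
 * Illustrative dispatch: bits below PERF_REG_X86_XMM0 (bit 32) keep
 * their legacy GPR/segment meaning; bits above it encode XMM halves
 * under the legacy ABI but eGPRs (R16-R31) once PERF_SAMPLE_REGS_ABI_SIMD
 * is in effect. Exact eGPR bit positions are assumed for this sketch.
 */
static const char *sample_reg_kind(int bit, int simd_abi)
{
	if (bit < 32)
		return "gpr";           /* legacy register space */
	return simd_abi ? "egpr" : "xmm";
}
```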
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/core.c | 37 ++++++++++++++++-------
arch/x86/events/perf_event.h | 10 +++++++
arch/x86/include/asm/perf_event.h | 4 +++
arch/x86/include/uapi/asm/perf_regs.h | 26 ++++++++++++++++
arch/x86/kernel/perf_regs.c | 43 ++++++++++++++++-----------
5 files changed, 91 insertions(+), 29 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index d86a4fbea1ed..d33cfbe38573 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -697,20 +697,21 @@ int x86_pmu_hw_config(struct perf_event *event)
}
if (event->attr.sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER)) {
- /*
- * Besides the general purpose registers, XMM registers may
- * be collected as well.
- */
- if (event_has_extended_regs(event)) {
- if (!(event->pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS))
- return -EINVAL;
- if (event->attr.sample_simd_regs_enabled)
- return -EINVAL;
- }
-
if (event_has_simd_regs(event)) {
+ u64 reserved = ~GENMASK_ULL(PERF_REG_MISC_MAX - 1, 0);
+
if (!(event->pmu->capabilities & PERF_PMU_CAP_SIMD_REGS))
return -EINVAL;
+ /*
+ * The XMM space in the perf_event_x86_regs is reclaimed
+ * for eGPRs and other general registers.
+ */
+ if (event->attr.sample_regs_user & reserved ||
+ event->attr.sample_regs_intr & reserved)
+ return -EINVAL;
+ if (event_needs_egprs(event) &&
+ !(x86_pmu.ext_regs_mask & XFEATURE_MASK_APX))
+ return -EINVAL;
/* No vector registers requested, only the width is set */
if (event->attr.sample_simd_vec_reg_qwords &&
!event->attr.sample_simd_vec_reg_intr &&
@@ -732,6 +733,15 @@ int x86_pmu_hw_config(struct perf_event *event)
if (event_needs_opmask(event) &&
!(x86_pmu.ext_regs_mask & XFEATURE_MASK_OPMASK))
return -EINVAL;
+ } else {
+ /*
+ * Besides the general purpose registers, XMM registers may
+ * be collected as well.
+ */
+ if (event_has_extended_regs(event)) {
+ if (!(event->pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS))
+ return -EINVAL;
+ }
}
}
@@ -1860,6 +1870,7 @@ inline void x86_pmu_clear_perf_regs(struct pt_regs *regs)
perf_regs->zmmh_regs = NULL;
perf_regs->h16zmm_regs = NULL;
perf_regs->opmask_regs = NULL;
+ perf_regs->egpr_regs = NULL;
}
static inline void x86_pmu_update_xregs(struct x86_perf_regs *perf_regs,
@@ -1883,6 +1894,8 @@ static inline void x86_pmu_update_xregs(struct x86_perf_regs *perf_regs,
perf_regs->h16zmm = get_xsave_addr(xsave, XFEATURE_Hi16_ZMM);
if (mask & XFEATURE_MASK_OPMASK)
perf_regs->opmask = get_xsave_addr(xsave, XFEATURE_OPMASK);
+ if (mask & XFEATURE_MASK_APX)
+ perf_regs->egpr = get_xsave_addr(xsave, XFEATURE_APX);
}
/*
@@ -1946,6 +1959,8 @@ static void x86_pmu_sample_xregs(struct perf_event *event,
mask |= XFEATURE_MASK_Hi16_ZMM;
if (event_needs_opmask(event))
mask |= XFEATURE_MASK_OPMASK;
+ if (event_needs_egprs(event))
+ mask |= XFEATURE_MASK_APX;
mask &= x86_pmu.ext_regs_mask;
if ((sample_type & PERF_SAMPLE_REGS_USER) && data->regs_user.abi)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 00f436f5840b..0974fd8b0e20 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -187,6 +187,16 @@ static inline bool event_needs_opmask(struct perf_event *event)
return false;
}
+static inline bool event_needs_egprs(struct perf_event *event)
+{
+ if (event->attr.sample_simd_regs_enabled &&
+ (event->attr.sample_regs_user & PERF_X86_EGPRS_MASK ||
+ event->attr.sample_regs_intr & PERF_X86_EGPRS_MASK))
+ return true;
+
+ return false;
+}
+
struct amd_nb {
int nb_id; /* NorthBridge id */
int refcnt; /* reference count */
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 7e8b60bddd5a..a54ea8fa6a04 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -747,6 +747,10 @@ struct x86_perf_regs {
u64 *opmask_regs;
struct avx_512_opmask_state *opmask;
};
+ union {
+ u64 *egpr_regs;
+ struct apx_state *egpr;
+ };
};
extern unsigned long perf_arch_instruction_pointer(struct pt_regs *regs);
diff --git a/arch/x86/include/uapi/asm/perf_regs.h b/arch/x86/include/uapi/asm/perf_regs.h
index f4a1630c1928..e721a47556d4 100644
--- a/arch/x86/include/uapi/asm/perf_regs.h
+++ b/arch/x86/include/uapi/asm/perf_regs.h
@@ -27,9 +27,34 @@ enum perf_event_x86_regs {
PERF_REG_X86_R13,
PERF_REG_X86_R14,
PERF_REG_X86_R15,
+ /*
+ * The eGPRs and XMM have overlaps. Only one can be used
+ * at a time. The ABI PERF_SAMPLE_REGS_ABI_SIMD is used to
+ * distinguish which one is used. If PERF_SAMPLE_REGS_ABI_SIMD
+ * is set, then eGPRs is used, otherwise, XMM is used.
+ *
+ * Extended GPRs (eGPRs)
+ */
+ PERF_REG_X86_R16,
+ PERF_REG_X86_R17,
+ PERF_REG_X86_R18,
+ PERF_REG_X86_R19,
+ PERF_REG_X86_R20,
+ PERF_REG_X86_R21,
+ PERF_REG_X86_R22,
+ PERF_REG_X86_R23,
+ PERF_REG_X86_R24,
+ PERF_REG_X86_R25,
+ PERF_REG_X86_R26,
+ PERF_REG_X86_R27,
+ PERF_REG_X86_R28,
+ PERF_REG_X86_R29,
+ PERF_REG_X86_R30,
+ PERF_REG_X86_R31,
/* These are the limits for the GPRs. */
PERF_REG_X86_32_MAX = PERF_REG_X86_GS + 1,
PERF_REG_X86_64_MAX = PERF_REG_X86_R15 + 1,
+ PERF_REG_MISC_MAX = PERF_REG_X86_R31 + 1,
/* These all need two bits set because they are 128bit */
PERF_REG_X86_XMM0 = 32,
@@ -54,6 +79,7 @@ enum perf_event_x86_regs {
};
#define PERF_REG_EXTENDED_MASK (~((1ULL << PERF_REG_X86_XMM0) - 1))
+#define PERF_X86_EGPRS_MASK GENMASK_ULL(PERF_REG_X86_R31, PERF_REG_X86_R16)
enum {
PERF_X86_SIMD_XMM_REGS = 16,
diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c
index 9b3134220b3e..a34cc52dbbeb 100644
--- a/arch/x86/kernel/perf_regs.c
+++ b/arch/x86/kernel/perf_regs.c
@@ -61,14 +61,22 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
{
struct x86_perf_regs *perf_regs;
- if (idx >= PERF_REG_X86_XMM0 && idx < PERF_REG_X86_XMM_MAX) {
+ if (idx > PERF_REG_X86_R15) {
perf_regs = container_of(regs, struct x86_perf_regs, regs);
- /* SIMD registers are moved to dedicated sample_simd_vec_reg */
- if (perf_regs->abi & PERF_SAMPLE_REGS_ABI_SIMD)
- return 0;
- if (!perf_regs->xmm_regs)
- return 0;
- return perf_regs->xmm_regs[idx - PERF_REG_X86_XMM0];
+
+ if (perf_regs->abi & PERF_SAMPLE_REGS_ABI_SIMD) {
+ if (idx <= PERF_REG_X86_R31) {
+ if (!perf_regs->egpr_regs)
+ return 0;
+ return perf_regs->egpr_regs[idx - PERF_REG_X86_R16];
+ }
+ } else {
+ if (idx >= PERF_REG_X86_XMM0 && idx < PERF_REG_X86_XMM_MAX) {
+ if (!perf_regs->xmm_regs)
+ return 0;
+ return perf_regs->xmm_regs[idx - PERF_REG_X86_XMM0];
+ }
+ }
}
if (WARN_ON_ONCE(idx >= ARRAY_SIZE(pt_regs_offset)))
@@ -153,18 +161,12 @@ int perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask,
return 0;
}
-#define PERF_REG_X86_RESERVED (((1ULL << PERF_REG_X86_XMM0) - 1) & \
- ~((1ULL << PERF_REG_X86_MAX) - 1))
+#define PERF_REG_X86_RESERVED (GENMASK_ULL(PERF_REG_X86_XMM0 - 1, PERF_REG_X86_AX) & \
+ ~GENMASK_ULL(PERF_REG_X86_R15, PERF_REG_X86_AX))
+#define PERF_REG_X86_EXT_RESERVED (~GENMASK_ULL(PERF_REG_MISC_MAX - 1, PERF_REG_X86_AX))
#ifdef CONFIG_X86_32
-#define REG_NOSUPPORT ((1ULL << PERF_REG_X86_R8) | \
- (1ULL << PERF_REG_X86_R9) | \
- (1ULL << PERF_REG_X86_R10) | \
- (1ULL << PERF_REG_X86_R11) | \
- (1ULL << PERF_REG_X86_R12) | \
- (1ULL << PERF_REG_X86_R13) | \
- (1ULL << PERF_REG_X86_R14) | \
- (1ULL << PERF_REG_X86_R15))
+#define REG_NOSUPPORT GENMASK_ULL(PERF_REG_X86_R15, PERF_REG_X86_R8)
int perf_reg_validate(u64 mask, bool simd_enabled)
{
@@ -187,8 +189,13 @@ u64 perf_reg_abi(struct task_struct *task)
int perf_reg_validate(u64 mask, bool simd_enabled)
{
+ if (!simd_enabled &&
+ (!mask || (mask & (REG_NOSUPPORT | PERF_REG_X86_RESERVED))))
+ return -EINVAL;
+
/* The mask could be 0 if only the SIMD registers are interested */
- if (mask & (REG_NOSUPPORT | PERF_REG_X86_RESERVED))
+ if (simd_enabled &&
+ (mask & (REG_NOSUPPORT | PERF_REG_X86_EXT_RESERVED)))
return -EINVAL;
return 0;
--
2.34.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [Patch v7 20/24] perf/x86: Enable SSP sampling using sample_regs_* fields
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (18 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 19/24] perf/x86: Enable eGPRs sampling using sample_regs_* fields Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-25 9:25 ` Mi, Dapeng
2026-03-24 0:41 ` [Patch v7 21/24] perf/x86/intel: Enable PERF_PMU_CAP_SIMD_REGS capability Dapeng Mi
` (5 subsequent siblings)
25 siblings, 1 reply; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Kan Liang, Dapeng Mi
From: Kan Liang <kan.liang@linux.intel.com>
This patch enables sampling of the CET SSP register via the sample_regs_*
fields.
To sample SSP, the sample_simd_regs_enabled field must be set. This
allows the spare space (reclaimed from the original XMM space) in the
sample_regs_* fields to be used for representing SSP.
Similar to eGPRs sampling, the perf_reg_value() function needs to
check if the PERF_SAMPLE_REGS_ABI_SIMD flag is set first, and then
determine whether to output SSP or legacy XMM registers to userspace.
Additionally, arch-PEBS supports sampling SSP, which is placed into the
GPRs group. This patch also enables arch-PEBS-based SSP sampling.
Currently, SSP sampling is only supported on the x86_64 architecture, as
CET is only available on x86_64 platforms.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/core.c | 9 +++++++++
arch/x86/events/intel/ds.c | 8 ++++++++
arch/x86/events/perf_event.h | 10 ++++++++++
arch/x86/include/asm/perf_event.h | 4 ++++
arch/x86/include/uapi/asm/perf_regs.h | 7 ++++---
arch/x86/kernel/perf_regs.c | 5 +++++
6 files changed, 40 insertions(+), 3 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index d33cfbe38573..ea451b48b9d6 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -712,6 +712,10 @@ int x86_pmu_hw_config(struct perf_event *event)
if (event_needs_egprs(event) &&
!(x86_pmu.ext_regs_mask & XFEATURE_MASK_APX))
return -EINVAL;
+ if (event_needs_ssp(event) &&
+ !(x86_pmu.ext_regs_mask & XFEATURE_MASK_CET_USER))
+ return -EINVAL;
+
/* Not require any vector registers but set width */
if (event->attr.sample_simd_vec_reg_qwords &&
!event->attr.sample_simd_vec_reg_intr &&
@@ -1871,6 +1875,7 @@ inline void x86_pmu_clear_perf_regs(struct pt_regs *regs)
perf_regs->h16zmm_regs = NULL;
perf_regs->opmask_regs = NULL;
perf_regs->egpr_regs = NULL;
+ perf_regs->cet_regs = NULL;
}
static inline void x86_pmu_update_xregs(struct x86_perf_regs *perf_regs,
@@ -1896,6 +1901,8 @@ static inline void x86_pmu_update_xregs(struct x86_perf_regs *perf_regs,
perf_regs->opmask = get_xsave_addr(xsave, XFEATURE_OPMASK);
if (mask & XFEATURE_MASK_APX)
perf_regs->egpr = get_xsave_addr(xsave, XFEATURE_APX);
+ if (mask & XFEATURE_MASK_CET_USER)
+ perf_regs->cet = get_xsave_addr(xsave, XFEATURE_CET_USER);
}
/*
@@ -1961,6 +1968,8 @@ static void x86_pmu_sample_xregs(struct perf_event *event,
mask |= XFEATURE_MASK_OPMASK;
if (event_needs_egprs(event))
mask |= XFEATURE_MASK_APX;
+ if (event_needs_ssp(event))
+ mask |= XFEATURE_MASK_CET_USER;
mask &= x86_pmu.ext_regs_mask;
if ((sample_type & PERF_SAMPLE_REGS_USER) && data->regs_user.abi)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index ac9a1c2f0177..3a2fb623e0ab 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -2685,6 +2685,14 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
__setup_pebs_gpr_group(event, data, regs,
(struct pebs_gprs *)gprs,
sample_type);
+
+ /* Currently only user space mode enables SSP. */
+ if (user_mode(regs) && (sample_type &
+ (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER))) {
+ /* Point to r15 so that cet_regs[1] = ssp. */
+ perf_regs->cet_regs = &gprs->r15;
+ ignore_mask = XFEATURE_MASK_CET_USER;
+ }
}
if (header->aux) {
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 0974fd8b0e20..36688d28407f 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -197,6 +197,16 @@ static inline bool event_needs_egprs(struct perf_event *event)
return false;
}
+static inline bool event_needs_ssp(struct perf_event *event)
+{
+ if (event->attr.sample_simd_regs_enabled &&
+ (event->attr.sample_regs_user & BIT_ULL(PERF_REG_X86_SSP) ||
+ event->attr.sample_regs_intr & BIT_ULL(PERF_REG_X86_SSP)))
+ return true;
+
+ return false;
+}
+
struct amd_nb {
int nb_id; /* NorthBridge id */
int refcnt; /* reference count */
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index a54ea8fa6a04..0c6d58e6c98f 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -751,6 +751,10 @@ struct x86_perf_regs {
u64 *egpr_regs;
struct apx_state *egpr;
};
+ union {
+ u64 *cet_regs;
+ struct cet_user_state *cet;
+ };
};
extern unsigned long perf_arch_instruction_pointer(struct pt_regs *regs);
diff --git a/arch/x86/include/uapi/asm/perf_regs.h b/arch/x86/include/uapi/asm/perf_regs.h
index e721a47556d4..98a5b6c8e24c 100644
--- a/arch/x86/include/uapi/asm/perf_regs.h
+++ b/arch/x86/include/uapi/asm/perf_regs.h
@@ -28,10 +28,10 @@ enum perf_event_x86_regs {
PERF_REG_X86_R14,
PERF_REG_X86_R15,
/*
- * The eGPRs and XMM have overlaps. Only one can be used
+ * The eGPRs/SSP and XMM have overlaps. Only one can be used
* at a time. The ABI PERF_SAMPLE_REGS_ABI_SIMD is used to
* distinguish which one is used. If PERF_SAMPLE_REGS_ABI_SIMD
- * is set, then eGPRs is used, otherwise, XMM is used.
+ * is set, then eGPRs/SSP is used, otherwise, XMM is used.
*
* Extended GPRs (eGPRs)
*/
@@ -51,10 +51,11 @@ enum perf_event_x86_regs {
PERF_REG_X86_R29,
PERF_REG_X86_R30,
PERF_REG_X86_R31,
+ PERF_REG_X86_SSP,
/* These are the limits for the GPRs. */
PERF_REG_X86_32_MAX = PERF_REG_X86_GS + 1,
PERF_REG_X86_64_MAX = PERF_REG_X86_R15 + 1,
- PERF_REG_MISC_MAX = PERF_REG_X86_R31 + 1,
+ PERF_REG_MISC_MAX = PERF_REG_X86_SSP + 1,
/* These all need two bits set because they are 128bit */
PERF_REG_X86_XMM0 = 32,
diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c
index a34cc52dbbeb..9715d1f90313 100644
--- a/arch/x86/kernel/perf_regs.c
+++ b/arch/x86/kernel/perf_regs.c
@@ -70,6 +70,11 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
return 0;
return perf_regs->egpr_regs[idx - PERF_REG_X86_R16];
}
+ if (idx == PERF_REG_X86_SSP) {
+ if (!perf_regs->cet_regs)
+ return 0;
+ return perf_regs->cet_regs[1];
+ }
} else {
if (idx >= PERF_REG_X86_XMM0 && idx < PERF_REG_X86_XMM_MAX) {
if (!perf_regs->xmm_regs)
--
2.34.1
* [Patch v7 21/24] perf/x86/intel: Enable PERF_PMU_CAP_SIMD_REGS capability
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (19 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 20/24] perf/x86: Enable SSP " Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-24 0:41 ` [Patch v7 22/24] perf/x86/intel: Enable arch-PEBS based SIMD/eGPRs/SSP sampling Dapeng Mi
` (4 subsequent siblings)
25 siblings, 0 replies; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Kan Liang, Dapeng Mi
From: Kan Liang <kan.liang@linux.intel.com>
Enable the PERF_PMU_CAP_SIMD_REGS capability if XSAVES support is
available for YMM, ZMM, OPMASK, eGPRs, or SSP.
Temporarily disable large PEBS sampling for these registers, as the
current arch-PEBS sampling code does not support them yet. Large PEBS
sampling for these registers will be enabled in subsequent patches.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 52 ++++++++++++++++++++++++++++++++----
1 file changed, 47 insertions(+), 5 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 5772dcc3bcbd..0a32a0367647 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -4424,11 +4424,33 @@ static unsigned long intel_pmu_large_pebs_flags(struct perf_event *event)
flags &= ~PERF_SAMPLE_TIME;
if (!event->attr.exclude_kernel)
flags &= ~PERF_SAMPLE_REGS_USER;
- if (event->attr.sample_regs_user & ~PEBS_GP_REGS)
- flags &= ~PERF_SAMPLE_REGS_USER;
- if (event->attr.sample_regs_intr &
- ~(PEBS_GP_REGS | PERF_REG_EXTENDED_MASK))
- flags &= ~PERF_SAMPLE_REGS_INTR;
+ if (event->attr.sample_simd_regs_enabled) {
+ u64 nolarge = PERF_X86_EGPRS_MASK | BIT_ULL(PERF_REG_X86_SSP);
+
+ /*
+ * PEBS HW can only collect the XMM0-XMM15 for now.
+ * Disable large PEBS for other vector registers, predicate
+ * registers, eGPRs, and SSP.
+ */
+ if (event->attr.sample_regs_user & nolarge ||
+ fls64(event->attr.sample_simd_vec_reg_user) > PERF_X86_H16ZMM_BASE ||
+ event->attr.sample_simd_pred_reg_user)
+ flags &= ~PERF_SAMPLE_REGS_USER;
+
+ if (event->attr.sample_regs_intr & nolarge ||
+ fls64(event->attr.sample_simd_vec_reg_intr) > PERF_X86_H16ZMM_BASE ||
+ event->attr.sample_simd_pred_reg_intr)
+ flags &= ~PERF_SAMPLE_REGS_INTR;
+
+ if (event->attr.sample_simd_vec_reg_qwords > PERF_X86_XMM_QWORDS)
+ flags &= ~(PERF_SAMPLE_REGS_USER | PERF_SAMPLE_REGS_INTR);
+ } else {
+ if (event->attr.sample_regs_user & ~PEBS_GP_REGS)
+ flags &= ~PERF_SAMPLE_REGS_USER;
+ if (event->attr.sample_regs_intr &
+ ~(PEBS_GP_REGS | PERF_REG_EXTENDED_MASK))
+ flags &= ~PERF_SAMPLE_REGS_INTR;
+ }
return flags;
}
@@ -5910,6 +5932,26 @@ static void intel_extended_regs_init(struct pmu *pmu)
x86_pmu.ext_regs_mask |= XFEATURE_MASK_SSE;
dest_pmu->capabilities |= PERF_PMU_CAP_EXTENDED_REGS;
+
+ if (boot_cpu_has(X86_FEATURE_AVX) &&
+ cpu_has_xfeatures(XFEATURE_MASK_YMM, NULL))
+ x86_pmu.ext_regs_mask |= XFEATURE_MASK_YMM;
+ if (boot_cpu_has(X86_FEATURE_APX) &&
+ cpu_has_xfeatures(XFEATURE_MASK_APX, NULL))
+ x86_pmu.ext_regs_mask |= XFEATURE_MASK_APX;
+ if (boot_cpu_has(X86_FEATURE_AVX512F)) {
+ if (cpu_has_xfeatures(XFEATURE_MASK_OPMASK, NULL))
+ x86_pmu.ext_regs_mask |= XFEATURE_MASK_OPMASK;
+ if (cpu_has_xfeatures(XFEATURE_MASK_ZMM_Hi256, NULL))
+ x86_pmu.ext_regs_mask |= XFEATURE_MASK_ZMM_Hi256;
+ if (cpu_has_xfeatures(XFEATURE_MASK_Hi16_ZMM, NULL))
+ x86_pmu.ext_regs_mask |= XFEATURE_MASK_Hi16_ZMM;
+ }
+ if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK))
+ x86_pmu.ext_regs_mask |= XFEATURE_MASK_CET_USER;
+
+ if (x86_pmu.ext_regs_mask != XFEATURE_MASK_SSE)
+ dest_pmu->capabilities |= PERF_PMU_CAP_SIMD_REGS;
}
#define counter_mask(_gp, _fixed) ((_gp) | ((u64)(_fixed) << INTEL_PMC_IDX_FIXED))
--
2.34.1
* [Patch v7 22/24] perf/x86/intel: Enable arch-PEBS based SIMD/eGPRs/SSP sampling
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (20 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 21/24] perf/x86/intel: Enable PERF_PMU_CAP_SIMD_REGS capability Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-24 0:41 ` [Patch v7 23/24] perf/x86: Activate back-to-back NMI detection for arch-PEBS induced NMIs Dapeng Mi
` (3 subsequent siblings)
25 siblings, 0 replies; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Dapeng Mi
This patch enables arch-PEBS based SIMD/eGPRs/SSP registers sampling.
Arch-PEBS supports sampling of these registers, with all except SSP
placed into the XSAVE-Enabled Registers (XER) group with the layout
described below.
Field Name Registers Used Size
XSTATE_BV XINUSE for groups 8 B
Reserved Reserved 8 B
SSER XMM0-XMM15 16 regs * 16 B = 256 B
YMMHIR Upper 128 bits of YMM0-YMM15 16 regs * 16 B = 256 B
EGPR R16-R31 16 regs * 8 B = 128 B
OPMASKR K0-K7 8 regs * 8 B = 64 B
ZMMHIR Upper 256 bits of ZMM0-ZMM15 16 regs * 32 B = 512 B
Hi16ZMMR ZMM16-ZMM31 16 regs * 64 B = 1024 B
Memory space in the output buffer is allocated for these sub-groups as
long as the corresponding Format.XER[55:49] bits in the PEBS record
header are set. However, the arch-PEBS hardware engine does not write
the sub-group if it is not used (in INIT state). In such cases, the
corresponding bit in the XSTATE_BV bitmap is set to 0. Therefore, the
XSTATE_BV field is checked to determine if the register data is actually
written for each PEBS record. If not, the register data is not output
to userspace.
The SSP register is sampled and placed into the GPRs group by arch-PEBS.
Additionally, the MSRs IA32_PMC_{GPn|FXm}_CFG_C.[55:49] bits are used to
manage which types of these registers need to be sampled.
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 75 ++++++++++++++++++++++--------
arch/x86/events/intel/ds.c | 77 ++++++++++++++++++++++++++++---
arch/x86/include/asm/msr-index.h | 7 +++
arch/x86/include/asm/perf_event.h | 8 +++-
4 files changed, 142 insertions(+), 25 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 0a32a0367647..e0dd57906bca 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3221,6 +3221,21 @@ static void intel_pmu_enable_event_ext(struct perf_event *event)
if (pebs_data_cfg & PEBS_DATACFG_XMMS)
ext |= ARCH_PEBS_VECR_XMM & cap.caps;
+ if (pebs_data_cfg & PEBS_DATACFG_YMMHS)
+ ext |= ARCH_PEBS_VECR_YMMH & cap.caps;
+
+ if (pebs_data_cfg & PEBS_DATACFG_EGPRS)
+ ext |= ARCH_PEBS_VECR_EGPRS & cap.caps;
+
+ if (pebs_data_cfg & PEBS_DATACFG_OPMASKS)
+ ext |= ARCH_PEBS_VECR_OPMASK & cap.caps;
+
+ if (pebs_data_cfg & PEBS_DATACFG_ZMMHS)
+ ext |= ARCH_PEBS_VECR_ZMMH & cap.caps;
+
+ if (pebs_data_cfg & PEBS_DATACFG_H16ZMMS)
+ ext |= ARCH_PEBS_VECR_H16ZMM & cap.caps;
+
if (pebs_data_cfg & PEBS_DATACFG_LBRS)
ext |= ARCH_PEBS_LBR & cap.caps;
@@ -4416,6 +4431,34 @@ static void intel_pebs_aliases_skl(struct perf_event *event)
return intel_pebs_aliases_precdist(event);
}
+static inline bool intel_pebs_support_regs(struct perf_event *event, u64 regs)
+{
+ struct arch_pebs_cap cap = hybrid(event->pmu, arch_pebs_cap);
+ int pebs_format = x86_pmu.intel_cap.pebs_format;
+ bool supported = true;
+
+ /* SSP */
+ if (regs & PEBS_DATACFG_GP)
+ supported &= x86_pmu.arch_pebs && (ARCH_PEBS_GPR & cap.caps);
+ if (regs & PEBS_DATACFG_XMMS) {
+ supported &= x86_pmu.arch_pebs ?
+ ARCH_PEBS_VECR_XMM & cap.caps :
+ pebs_format > 3 && x86_pmu.intel_cap.pebs_baseline;
+ }
+ if (regs & PEBS_DATACFG_YMMHS)
+ supported &= x86_pmu.arch_pebs && (ARCH_PEBS_VECR_YMMH & cap.caps);
+ if (regs & PEBS_DATACFG_EGPRS)
+ supported &= x86_pmu.arch_pebs && (ARCH_PEBS_VECR_EGPRS & cap.caps);
+ if (regs & PEBS_DATACFG_OPMASKS)
+ supported &= x86_pmu.arch_pebs && (ARCH_PEBS_VECR_OPMASK & cap.caps);
+ if (regs & PEBS_DATACFG_ZMMHS)
+ supported &= x86_pmu.arch_pebs && (ARCH_PEBS_VECR_ZMMH & cap.caps);
+ if (regs & PEBS_DATACFG_H16ZMMS)
+ supported &= x86_pmu.arch_pebs && (ARCH_PEBS_VECR_H16ZMM & cap.caps);
+
+ return supported;
+}
+
static unsigned long intel_pmu_large_pebs_flags(struct perf_event *event)
{
unsigned long flags = x86_pmu.large_pebs_flags;
@@ -4425,24 +4468,20 @@ static unsigned long intel_pmu_large_pebs_flags(struct perf_event *event)
if (!event->attr.exclude_kernel)
flags &= ~PERF_SAMPLE_REGS_USER;
if (event->attr.sample_simd_regs_enabled) {
- u64 nolarge = PERF_X86_EGPRS_MASK | BIT_ULL(PERF_REG_X86_SSP);
-
- /*
- * PEBS HW can only collect the XMM0-XMM15 for now.
- * Disable large PEBS for other vector registers, predicate
- * registers, eGPRs, and SSP.
- */
- if (event->attr.sample_regs_user & nolarge ||
- fls64(event->attr.sample_simd_vec_reg_user) > PERF_X86_H16ZMM_BASE ||
- event->attr.sample_simd_pred_reg_user)
- flags &= ~PERF_SAMPLE_REGS_USER;
-
- if (event->attr.sample_regs_intr & nolarge ||
- fls64(event->attr.sample_simd_vec_reg_intr) > PERF_X86_H16ZMM_BASE ||
- event->attr.sample_simd_pred_reg_intr)
- flags &= ~PERF_SAMPLE_REGS_INTR;
-
- if (event->attr.sample_simd_vec_reg_qwords > PERF_X86_XMM_QWORDS)
+ if ((event_needs_ssp(event) &&
+ !intel_pebs_support_regs(event, PEBS_DATACFG_GP)) ||
+ (event_needs_xmm(event) &&
+ !intel_pebs_support_regs(event, PEBS_DATACFG_XMMS)) ||
+ (event_needs_ymm(event) &&
+ !intel_pebs_support_regs(event, PEBS_DATACFG_YMMHS)) ||
+ (event_needs_egprs(event) &&
+ !intel_pebs_support_regs(event, PEBS_DATACFG_EGPRS)) ||
+ (event_needs_opmask(event) &&
+ !intel_pebs_support_regs(event, PEBS_DATACFG_OPMASKS)) ||
+ (event_needs_low16_zmm(event) &&
+ !intel_pebs_support_regs(event, PEBS_DATACFG_ZMMHS)) ||
+ (event_needs_high16_zmm(event) &&
+ !intel_pebs_support_regs(event, PEBS_DATACFG_H16ZMMS)))
flags &= ~(PERF_SAMPLE_REGS_USER | PERF_SAMPLE_REGS_INTR);
} else {
if (event->attr.sample_regs_user & ~PEBS_GP_REGS)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 3a2fb623e0ab..4743bdfb4ed4 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1740,11 +1740,22 @@ static u64 pebs_update_adaptive_cfg(struct perf_event *event)
((attr->config & INTEL_ARCH_EVENT_MASK) ==
x86_pmu.rtm_abort_event);
- if (gprs || (attr->precise_ip < 2) || tsx_weight)
+ if (gprs || (attr->precise_ip < 2) ||
+ tsx_weight || event_needs_ssp(event))
pebs_data_cfg |= PEBS_DATACFG_GP;
if (event_needs_xmm(event))
pebs_data_cfg |= PEBS_DATACFG_XMMS;
+ if (event_needs_ymm(event))
+ pebs_data_cfg |= PEBS_DATACFG_YMMHS;
+ if (event_needs_low16_zmm(event))
+ pebs_data_cfg |= PEBS_DATACFG_ZMMHS;
+ if (event_needs_high16_zmm(event))
+ pebs_data_cfg |= PEBS_DATACFG_H16ZMMS;
+ if (event_needs_opmask(event))
+ pebs_data_cfg |= PEBS_DATACFG_OPMASKS;
+ if (event_needs_egprs(event))
+ pebs_data_cfg |= PEBS_DATACFG_EGPRS;
if (sample_type & PERF_SAMPLE_BRANCH_STACK) {
/*
@@ -2705,15 +2716,69 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
meminfo->tsx_tuning, ax);
}
- if (header->xmm) {
+ if (header->xmm || header->ymmh || header->egpr ||
+ header->opmask || header->zmmh || header->h16zmm) {
+ struct arch_pebs_xer_header *xer_header = next_record;
struct pebs_xmm *xmm;
+ struct ymmh_struct *ymmh;
+ struct avx_512_zmm_uppers_state *zmmh;
+ struct avx_512_hi16_state *h16zmm;
+ struct avx_512_opmask_state *opmask;
+ struct apx_state *egpr;
next_record += sizeof(struct arch_pebs_xer_header);
- ignore_mask |= XFEATURE_MASK_SSE;
- xmm = next_record;
- perf_regs->xmm_regs = xmm->xmm;
- next_record = xmm + 1;
+ if (header->xmm) {
+ ignore_mask |= XFEATURE_MASK_SSE;
+ xmm = next_record;
+ /*
+ * Only output XMM regs to user space when arch-PEBS
+			 * really writes data into the xstate area.
+ */
+ if (xer_header->xstate & XFEATURE_MASK_SSE)
+ perf_regs->xmm_regs = xmm->xmm;
+ next_record = xmm + 1;
+ }
+
+ if (header->ymmh) {
+ ignore_mask |= XFEATURE_MASK_YMM;
+ ymmh = next_record;
+ if (xer_header->xstate & XFEATURE_MASK_YMM)
+ perf_regs->ymmh = ymmh;
+ next_record = ymmh + 1;
+ }
+
+ if (header->egpr) {
+ ignore_mask |= XFEATURE_MASK_APX;
+ egpr = next_record;
+ if (xer_header->xstate & XFEATURE_MASK_APX)
+ perf_regs->egpr = egpr;
+ next_record = egpr + 1;
+ }
+
+ if (header->opmask) {
+ ignore_mask |= XFEATURE_MASK_OPMASK;
+ opmask = next_record;
+ if (xer_header->xstate & XFEATURE_MASK_OPMASK)
+ perf_regs->opmask = opmask;
+ next_record = opmask + 1;
+ }
+
+ if (header->zmmh) {
+ ignore_mask |= XFEATURE_MASK_ZMM_Hi256;
+ zmmh = next_record;
+ if (xer_header->xstate & XFEATURE_MASK_ZMM_Hi256)
+ perf_regs->zmmh = zmmh;
+ next_record = zmmh + 1;
+ }
+
+ if (header->h16zmm) {
+ ignore_mask |= XFEATURE_MASK_Hi16_ZMM;
+ h16zmm = next_record;
+ if (xer_header->xstate & XFEATURE_MASK_Hi16_ZMM)
+ perf_regs->h16zmm = h16zmm;
+ next_record = h16zmm + 1;
+ }
}
if (header->lbr) {
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index e25434d21159..4fe796993c97 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -350,6 +350,13 @@
#define ARCH_PEBS_LBR_SHIFT 40
#define ARCH_PEBS_LBR (0x3ull << ARCH_PEBS_LBR_SHIFT)
#define ARCH_PEBS_VECR_XMM BIT_ULL(49)
+#define ARCH_PEBS_VECR_YMMH BIT_ULL(50)
+#define ARCH_PEBS_VECR_EGPRS BIT_ULL(51)
+#define ARCH_PEBS_VECR_OPMASK BIT_ULL(53)
+#define ARCH_PEBS_VECR_ZMMH BIT_ULL(54)
+#define ARCH_PEBS_VECR_H16ZMM BIT_ULL(55)
+#define ARCH_PEBS_VECR_EXT_SHIFT 50
+#define ARCH_PEBS_VECR_EXT (0x3full << ARCH_PEBS_VECR_EXT_SHIFT)
#define ARCH_PEBS_GPR BIT_ULL(61)
#define ARCH_PEBS_AUX BIT_ULL(62)
#define ARCH_PEBS_EN BIT_ULL(63)
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 0c6d58e6c98f..db8bba43401c 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -148,6 +148,11 @@
#define PEBS_DATACFG_LBRS BIT_ULL(3)
#define PEBS_DATACFG_CNTR BIT_ULL(4)
#define PEBS_DATACFG_METRICS BIT_ULL(5)
+#define PEBS_DATACFG_YMMHS BIT_ULL(6)
+#define PEBS_DATACFG_OPMASKS BIT_ULL(7)
+#define PEBS_DATACFG_ZMMHS BIT_ULL(8)
+#define PEBS_DATACFG_H16ZMMS BIT_ULL(9)
+#define PEBS_DATACFG_EGPRS BIT_ULL(10)
#define PEBS_DATACFG_LBR_SHIFT 24
#define PEBS_DATACFG_CNTR_SHIFT 32
#define PEBS_DATACFG_CNTR_MASK GENMASK_ULL(15, 0)
@@ -545,7 +550,8 @@ struct arch_pebs_header {
rsvd3:7,
xmm:1,
ymmh:1,
- rsvd4:2,
+ egpr:1,
+ rsvd4:1,
opmask:1,
zmmh:1,
h16zmm:1,
--
2.34.1
* [Patch v7 23/24] perf/x86: Activate back-to-back NMI detection for arch-PEBS induced NMIs
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (21 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 22/24] perf/x86/intel: Enable arch-PEBS based SIMD/eGPRs/SSP sampling Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-24 0:41 ` [Patch v7 24/24] perf/x86/intel: Add sanity check for PEBS fragment size Dapeng Mi
` (2 subsequent siblings)
25 siblings, 0 replies; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Dapeng Mi
When two or more identical PEBS events with the same sampling period are
programmed on a mix of PDIST and non-PDIST counters, multiple
back-to-back NMIs can be triggered.
The Linux PMI handler processes the first NMI and clears the
GLOBAL_STATUS MSR. If a second NMI is triggered immediately after
the first, it is recognized as a "suspicious NMI" because no bits are set
in the GLOBAL_STATUS MSR (cleared by the first NMI).
This issue does not lead to PEBS data corruption or data loss, but it
does result in an annoying warning message.
The current NMI handler supports back-to-back NMI detection, but it
requires the PMI handler to return the count of actually processed events,
which the PEBS handler does not currently do.
This patch modifies the PEBS handlers to return the count of actually
processed events, thereby activating back-to-back NMI detection and
avoiding the "suspicious NMI" warning.
Suggested-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 6 ++----
arch/x86/events/intel/ds.c | 40 ++++++++++++++++++++++++------------
arch/x86/events/perf_event.h | 2 +-
3 files changed, 30 insertions(+), 18 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index e0dd57906bca..9da0a1354045 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3558,9 +3558,8 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
if (__test_and_clear_bit(GLOBAL_STATUS_BUFFER_OVF_BIT, (unsigned long *)&status)) {
u64 pebs_enabled = cpuc->pebs_enabled;
- handled++;
x86_pmu_handle_guest_pebs(regs, &data);
- static_call(x86_pmu_drain_pebs)(regs, &data);
+ handled += static_call(x86_pmu_drain_pebs)(regs, &data);
/*
* PMI throttle may be triggered, which stops the PEBS event.
@@ -3587,8 +3586,7 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
*/
if (__test_and_clear_bit(GLOBAL_STATUS_ARCH_PEBS_THRESHOLD_BIT,
(unsigned long *)&status)) {
- handled++;
- static_call(x86_pmu_drain_pebs)(regs, &data);
+ handled += static_call(x86_pmu_drain_pebs)(regs, &data);
if (cpuc->events[INTEL_PMC_IDX_FIXED_SLOTS] &&
is_pebs_counter_event_group(cpuc->events[INTEL_PMC_IDX_FIXED_SLOTS]))
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 4743bdfb4ed4..6e1c516122c0 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -3035,7 +3035,7 @@ __intel_pmu_pebs_events(struct perf_event *event,
__intel_pmu_pebs_last_event(event, iregs, regs, data, at, count, setup_sample);
}
-static void intel_pmu_drain_pebs_core(struct pt_regs *iregs, struct perf_sample_data *data)
+static int intel_pmu_drain_pebs_core(struct pt_regs *iregs, struct perf_sample_data *data)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
struct debug_store *ds = cpuc->ds;
@@ -3044,7 +3044,7 @@ static void intel_pmu_drain_pebs_core(struct pt_regs *iregs, struct perf_sample_
int n;
if (!x86_pmu.pebs_active)
- return;
+ return 0;
at = (struct pebs_record_core *)(unsigned long)ds->pebs_buffer_base;
top = (struct pebs_record_core *)(unsigned long)ds->pebs_index;
@@ -3055,22 +3055,24 @@ static void intel_pmu_drain_pebs_core(struct pt_regs *iregs, struct perf_sample_
ds->pebs_index = ds->pebs_buffer_base;
if (!test_bit(0, cpuc->active_mask))
- return;
+ return 0;
WARN_ON_ONCE(!event);
if (!event->attr.precise_ip)
- return;
+ return 0;
n = top - at;
if (n <= 0) {
if (event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD)
intel_pmu_save_and_restart_reload(event, 0);
- return;
+ return 0;
}
__intel_pmu_pebs_events(event, iregs, data, at, top, 0, n,
setup_pebs_fixed_sample_data);
+
+ return 1; /* PMC0 only */
}
static void intel_pmu_pebs_event_update_no_drain(struct cpu_hw_events *cpuc, u64 mask)
@@ -3093,7 +3095,7 @@ static void intel_pmu_pebs_event_update_no_drain(struct cpu_hw_events *cpuc, u64
}
}
-static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_data *data)
+static int intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_data *data)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
struct debug_store *ds = cpuc->ds;
@@ -3102,11 +3104,12 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
short counts[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS] = {};
short error[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS] = {};
int max_pebs_events = intel_pmu_max_num_pebs(NULL);
+ u64 events_bitmap = 0;
int bit, i, size;
u64 mask;
if (!x86_pmu.pebs_active)
- return;
+ return 0;
base = (struct pebs_record_nhm *)(unsigned long)ds->pebs_buffer_base;
top = (struct pebs_record_nhm *)(unsigned long)ds->pebs_index;
@@ -3122,7 +3125,7 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
if (unlikely(base >= top)) {
intel_pmu_pebs_event_update_no_drain(cpuc, mask);
- return;
+ return 0;
}
for (at = base; at < top; at += x86_pmu.pebs_record_size) {
@@ -3186,6 +3189,7 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
if ((counts[bit] == 0) && (error[bit] == 0))
continue;
+ events_bitmap |= BIT_ULL(bit);
event = cpuc->events[bit];
if (WARN_ON_ONCE(!event))
continue;
@@ -3207,6 +3211,8 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
setup_pebs_fixed_sample_data);
}
}
+
+ return hweight64(events_bitmap);
}
static __always_inline void
@@ -3262,7 +3268,7 @@ __intel_pmu_handle_last_pebs_record(struct pt_regs *iregs,
static DEFINE_PER_CPU(struct x86_perf_regs, x86_pebs_regs);
-static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_data *data)
+static int intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_data *data)
{
short counts[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS] = {};
void *last[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS];
@@ -3272,10 +3278,11 @@ static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_d
struct pt_regs *regs = &perf_regs->regs;
struct pebs_basic *basic;
void *base, *at, *top;
+ u64 events_bitmap = 0;
u64 mask;
if (!x86_pmu.pebs_active)
- return;
+ return 0;
base = (struct pebs_basic *)(unsigned long)ds->pebs_buffer_base;
top = (struct pebs_basic *)(unsigned long)ds->pebs_index;
@@ -3288,7 +3295,7 @@ static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_d
if (unlikely(base >= top)) {
intel_pmu_pebs_event_update_no_drain(cpuc, mask);
- return;
+ return 0;
}
if (!iregs)
@@ -3303,6 +3310,7 @@ static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_d
continue;
pebs_status = mask & basic->applicable_counters;
+ events_bitmap |= pebs_status;
__intel_pmu_handle_pebs_record(iregs, regs, data, at,
pebs_status, counts, last,
setup_pebs_adaptive_sample_data);
@@ -3310,9 +3318,11 @@ static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_d
__intel_pmu_handle_last_pebs_record(iregs, regs, data, mask, counts, last,
setup_pebs_adaptive_sample_data);
+
+ return hweight64(events_bitmap);
}
-static void intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
+static int intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
struct perf_sample_data *data)
{
short counts[INTEL_PMC_IDX_FIXED + MAX_FIXED_PEBS_EVENTS] = {};
@@ -3322,13 +3332,14 @@ static void intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
struct x86_perf_regs *perf_regs = this_cpu_ptr(&x86_pebs_regs);
struct pt_regs *regs = &perf_regs->regs;
void *base, *at, *top;
+ u64 events_bitmap = 0;
u64 mask;
rdmsrq(MSR_IA32_PEBS_INDEX, index.whole);
if (unlikely(!index.wr)) {
intel_pmu_pebs_event_update_no_drain(cpuc, X86_PMC_IDX_MAX);
- return;
+ return 0;
}
base = cpuc->pebs_vaddr;
@@ -3367,6 +3378,7 @@ static void intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
basic = at + sizeof(struct arch_pebs_header);
pebs_status = mask & basic->applicable_counters;
+ events_bitmap |= pebs_status;
__intel_pmu_handle_pebs_record(iregs, regs, data, at,
pebs_status, counts, last,
setup_arch_pebs_sample_data);
@@ -3386,6 +3398,8 @@ static void intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
__intel_pmu_handle_last_pebs_record(iregs, regs, data, mask,
counts, last,
setup_arch_pebs_sample_data);
+
+ return hweight64(events_bitmap);
}
static void __init intel_arch_pebs_init(void)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 36688d28407f..e6bf786728eb 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1014,7 +1014,7 @@ struct x86_pmu {
int pebs_record_size;
int pebs_buffer_size;
u64 pebs_events_mask;
- void (*drain_pebs)(struct pt_regs *regs, struct perf_sample_data *data);
+ int (*drain_pebs)(struct pt_regs *regs, struct perf_sample_data *data);
struct event_constraint *pebs_constraints;
void (*pebs_aliases)(struct perf_event *event);
u64 (*pebs_latency_data)(struct perf_event *event, u64 status);
--
2.34.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [Patch v7 24/24] perf/x86/intel: Add sanity check for PEBS fragment size
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (22 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 23/24] perf/x86: Activate back-to-back NMI detection for arch-PEBS induced NMIs Dapeng Mi
@ 2026-03-24 0:41 ` Dapeng Mi
2026-03-24 1:08 ` [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Mi, Dapeng
2026-03-25 9:41 ` Mi, Dapeng
25 siblings, 0 replies; 33+ messages in thread
From: Dapeng Mi @ 2026-03-24 0:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Dapeng Mi
Prevent potential infinite loops by adding a sanity check for
corrupted PEBS fragment sizes, which could occur in theory.
If a corrupted PEBS fragment is detected, the entire PEBS record,
including the fragment and all subsequent records, is discarded.
This ensures the integrity of PEBS data and prevents
setup_arch_pebs_sample_data() from looping forever.
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
V7: new patch.
arch/x86/events/intel/ds.c | 33 +++++++++++++++++++++++----------
1 file changed, 23 insertions(+), 10 deletions(-)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 6e1c516122c0..4b0dd8379737 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -2819,7 +2819,7 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
}
/* Parse followed fragments if there are. */
- if (arch_pebs_record_continued(header)) {
+ if (arch_pebs_record_continued(header) && header->size) {
at = at + header->size;
goto again;
}
@@ -2948,13 +2948,17 @@ __intel_pmu_pebs_last_event(struct perf_event *event,
struct pt_regs *iregs,
struct pt_regs *regs,
struct perf_sample_data *data,
- void *at,
- int count,
+ void *at, int count, bool corrupted,
setup_fn setup_sample)
{
struct hw_perf_event *hwc = &event->hw;
- setup_sample(event, iregs, at, data, regs);
+ /* Skip parsing corrupted PEBS record. */
+ if (corrupted)
+ perf_sample_data_init(data, 0, event->hw.last_period);
+ else
+ setup_sample(event, iregs, at, data, regs);
+
if (iregs == &dummy_iregs) {
/*
* The PEBS records may be drained in the non-overflow context,
@@ -3026,13 +3030,15 @@ __intel_pmu_pebs_events(struct perf_event *event,
iregs = &dummy_iregs;
while (cnt > 1) {
- __intel_pmu_pebs_event(event, iregs, regs, data, at, setup_sample);
+ __intel_pmu_pebs_event(event, iregs, regs, data,
+ at, setup_sample);
at += cpuc->pebs_record_size;
at = get_next_pebs_record_by_bit(at, top, bit);
cnt--;
}
- __intel_pmu_pebs_last_event(event, iregs, regs, data, at, count, setup_sample);
+ __intel_pmu_pebs_last_event(event, iregs, regs, data, at,
+ count, false, setup_sample);
}
static int intel_pmu_drain_pebs_core(struct pt_regs *iregs, struct perf_sample_data *data)
@@ -3247,7 +3253,8 @@ static __always_inline void
__intel_pmu_handle_last_pebs_record(struct pt_regs *iregs,
struct pt_regs *regs,
struct perf_sample_data *data,
- u64 mask, short *counts, void **last,
+ u64 mask, short *counts,
+ void **last, bool corrupted,
setup_fn setup_sample)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
@@ -3261,7 +3268,7 @@ __intel_pmu_handle_last_pebs_record(struct pt_regs *iregs,
event = cpuc->events[bit];
__intel_pmu_pebs_last_event(event, iregs, regs, data, last[bit],
- counts[bit], setup_sample);
+ counts[bit], corrupted, setup_sample);
}
}
@@ -3317,7 +3324,7 @@ static int intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_da
}
__intel_pmu_handle_last_pebs_record(iregs, regs, data, mask, counts, last,
- setup_pebs_adaptive_sample_data);
+ false, setup_pebs_adaptive_sample_data);
return hweight64(events_bitmap);
}
@@ -3333,6 +3340,7 @@ static int intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
struct pt_regs *regs = &perf_regs->regs;
void *base, *at, *top;
u64 events_bitmap = 0;
+ bool corrupted = false;
u64 mask;
rdmsrq(MSR_IA32_PEBS_INDEX, index.whole);
@@ -3388,6 +3396,10 @@ static int intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
if (!header->size)
break;
at += header->size;
+ if (WARN_ON_ONCE(at >= top)) {
+ corrupted = true;
+ goto done;
+ }
header = at;
}
@@ -3395,8 +3407,9 @@ static int intel_pmu_drain_arch_pebs(struct pt_regs *iregs,
at += header->size;
}
+done:
__intel_pmu_handle_last_pebs_record(iregs, regs, data, mask,
- counts, last,
+ counts, last, corrupted,
setup_arch_pebs_sample_data);
return hweight64(events_bitmap);
--
2.34.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* Re: [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (23 preceding siblings ...)
2026-03-24 0:41 ` [Patch v7 24/24] perf/x86/intel: Add sanity check for PEBS fragment size Dapeng Mi
@ 2026-03-24 1:08 ` Mi, Dapeng
2026-03-25 9:41 ` Mi, Dapeng
25 siblings, 0 replies; 33+ messages in thread
From: Mi, Dapeng @ 2026-03-24 1:08 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao
Here is the corresponding perf-tools patchset. Thanks.
https://lore.kernel.org/all/20260324005706.3778057-1-dapeng1.mi@linux.intel.com/
On 3/24/2026 8:40 AM, Dapeng Mi wrote:
> Changes since V6:
> - Fix potential overwritten issue in hybrid PMU structure (patch 01/24)
> - Restrict PEBS events work on GP counters if no PEBS baseline suggested
> (patch 02/24)
> - Use per-cpu x86_intr_regs for perf_event_nmi_handler() instead of
> temporary variable (patch 06/24)
> - Add helper update_fpu_state_and_flag() to ensure TIF_NEED_FPU_LOAD is
> set after save_fpregs_to_fpstate() call (patch 09/24)
> - Optimize and simplify x86_pmu_sample_xregs(), etc. (patch 11/24)
> - Add macro word_for_each_set_bit() to simplify u64 set-bit iteration
> (patch 13/24)
> - Add sanity check for PEBS fragment size (patch 24/24)
>
> Changes since V5:
> - Introduce 3 commits to fix newly found PEBS issues (Patch 01~03/19)
> - Address Peter comments, including,
> * Fully support user-regs sampling of the SIMD/eGPRs/SSP registers
> * Adjust newly added fields in perf_event_attr to avoid holes
> * Fix the endian issue introduced by for_each_set_bit() in
> event/core.c
> * Remove some unnecessary macros from UAPI header perf_regs.h
> * Enhance b2b NMI detection for all PEBS handlers to ensure identical
> behaviors of all PEBS handlers
> - Split perf-tools patches which would be posted in a separate patchset
> later
>
> Changes since V4:
> - Rewrite some functions comments and commit messages (Dave)
> - Add arch-PEBS based SIMD/eGPRs/SSP sampling support (Patch 15/19)
> - Fix "suspicious NMI" warning observed on PTL/NVL P-core and DMR by
> activating back-to-back NMI detection mechanism (Patch 16/19)
> - Fix some minor issues on perf-tool patches (Patch 18/19)
>
> Changes since V3:
> - Drop the SIMD registers if an NMI hits kernel mode for REGS_USER.
> - Only dump the available regs, rather than zero and dump the
> unavailable regs. It's possible that the dumped registers are a subset
> of the requested registers.
> - Some minor updates to address Dapeng's comments in V3.
>
> Changes since V2:
> - Use the FPU format for the x86_pmu.ext_regs_mask as well
> - Add a check before invoking xsaves_nmi()
> - Add perf_simd_reg_check() to retrieve the number of available
> registers. If the kernel fails to get the requested registers, e.g.,
> XSAVES fails, nothing dumps to the userspace (the V2 dumps all 0s).
> - Add POC perf tool patches
>
> Changes since V1:
> - Apply the new interfaces to configure and dump the SIMD registers
> - Utilize the existing FPU functions, e.g., xstate_calculate_size,
> get_xsave_addr().
>
> Starting from Intel Ice Lake, XMM registers can be collected in a PEBS
> record. Future Architecture PEBS will include additional registers such
> as YMM, ZMM, OPMASK, SSP and APX eGPRs, contingent on hardware support.
>
> This patch set introduces a software solution to mitigate the hardware
> requirement by utilizing the XSAVES command to retrieve the requested
> registers in the overflow handler. This feature is no longer limited to
> PEBS events or specific platforms. While the hardware solution remains
> preferable due to its lower overhead and higher accuracy, this software
> approach provides a viable alternative.
>
> The solution is theoretically compatible with all x86 platforms but is
> currently enabled on newer platforms, including Sapphire Rapids and
> later P-core server platforms, Sierra Forest and later E-core server
> platforms and recent Client platforms, like Arrow Lake, Panther Lake and
> Nova Lake.
>
> Newly supported registers include YMM, ZMM, OPMASK, SSP, and APX eGPRs.
> Due to space constraints in sample_regs_user/intr, new fields have been
> introduced in the perf_event_attr structure to accommodate these
> registers.
>
> After a long discussion in V1,
> https://lore.kernel.org/lkml/3f1c9a9e-cb63-47ff-a5e9-06555fa6cc9a@linux.intel.com/
> The below new fields are introduced.
>
> @@ -547,6 +549,25 @@ struct perf_event_attr {
>
> __u64 config3; /* extension of config2 */
> __u64 config4; /* extension of config3 */
> +
> + /*
> + * Defines set of SIMD registers to dump on samples.
> + * The sample_simd_regs_enabled !=0 implies the
> + * set of SIMD registers is used to config all SIMD registers.
> + * If !sample_simd_regs_enabled, sample_regs_XXX may be used to
> + * config some SIMD registers on X86.
> + */
> + union {
> + __u16 sample_simd_regs_enabled;
> + __u16 sample_simd_pred_reg_qwords;
> + };
> + __u16 sample_simd_vec_reg_qwords;
> + __u32 __reserved_4;
> +
> + __u32 sample_simd_pred_reg_intr;
> + __u32 sample_simd_pred_reg_user;
> + __u64 sample_simd_vec_reg_intr;
> + __u64 sample_simd_vec_reg_user;
> };
>
> /*
> @@ -1020,7 +1041,15 @@ enum perf_event_type {
> * } && PERF_SAMPLE_BRANCH_STACK
> *
> * { u64 abi; # enum perf_sample_regs_abi
> - * u64 regs[weight(mask)]; } && PERF_SAMPLE_REGS_USER
> + * u64 regs[weight(mask)];
> + * struct {
> + * u16 nr_vectors; # 0 ... weight(sample_simd_vec_reg_user)
> + * u16 vector_qwords; # 0 ... sample_simd_vec_reg_qwords
> + * u16 nr_pred; # 0 ... weight(sample_simd_pred_reg_user)
> + * u16 pred_qwords; # 0 ... sample_simd_pred_reg_qwords
> + * u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords];
> + * } && (abi & PERF_SAMPLE_REGS_ABI_SIMD)
> + * } && PERF_SAMPLE_REGS_USER
> *
> * { u64 size;
> * char data[size];
> @@ -1047,7 +1076,15 @@ enum perf_event_type {
> * { u64 data_src; } && PERF_SAMPLE_DATA_SRC
> * { u64 transaction; } && PERF_SAMPLE_TRANSACTION
> * { u64 abi; # enum perf_sample_regs_abi
> - * u64 regs[weight(mask)]; } && PERF_SAMPLE_REGS_INTR
> + * u64 regs[weight(mask)];
> + * struct {
> + * u16 nr_vectors; # 0 ... weight(sample_simd_vec_reg_intr)
> + * u16 vector_qwords; # 0 ... sample_simd_vec_reg_qwords
> + * u16 nr_pred; # 0 ... weight(sample_simd_pred_reg_intr)
> + * u16 pred_qwords; # 0 ... sample_simd_pred_reg_qwords
> + * u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords];
> + * } && (abi & PERF_SAMPLE_REGS_ABI_SIMD)
> + * } && PERF_SAMPLE_REGS_INTR
> * { u64 phys_addr;} && PERF_SAMPLE_PHYS_ADDR
> * { u64 cgroup;} && PERF_SAMPLE_CGROUP
> * { u64 data_page_size;} && PERF_SAMPLE_DATA_PAGE_SIZE
>
>
> To maintain simplicity, a single field, sample_{simd|pred}_vec_reg_qwords,
> is introduced to indicate register width. For example:
> - sample_simd_vec_reg_qwords = 2 for XMM registers (128 bits) on x86
> - sample_simd_vec_reg_qwords = 4 for YMM registers (256 bits) on x86
>
> Four additional fields, sample_{simd|pred}_vec_reg_{intr|user}, represent
> the bitmap of sampling registers. For instance, the bitmap for x86
> XMM registers is 0xffff (16 XMM registers). Although users can
> theoretically sample a subset of registers, the current perf-tool
> implementation supports sampling all registers of each type to avoid
> complexity.
>
> A new ABI, PERF_SAMPLE_REGS_ABI_SIMD, is introduced to signal user space
> tools about the presence of SIMD registers in sampling records. When this
> flag is detected, tools should recognize that extra SIMD register data
> follows the general register data. The layout of the extra SIMD register
> data is displayed as follows.
>
> u16 nr_vectors;
> u16 vector_qwords;
> u16 nr_pred;
> u16 pred_qwords;
> u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords];
>
> With this patch set, sampling for the aforementioned registers is
> supported on the Intel Nova Lake platform.
>
> Examples:
> $perf record -I?
> available registers: AX BX CX DX SI DI BP SP IP FLAGS CS SS R8 R9 R10
> R11 R12 R13 R14 R15 R16 R17 R18 R19 R20 R21 R22 R23 R24 R25 R26 R27 R28
> R29 R30 R31 SSP XMM0-15 YMM0-15 ZMM0-31 OPMASK0-7
>
> $perf record --user-regs=?
> available registers: AX BX CX DX SI DI BP SP IP FLAGS CS SS R8 R9 R10
> R11 R12 R13 R14 R15 R16 R17 R18 R19 R20 R21 R22 R23 R24 R25 R26 R27 R28
> R29 R30 R31 SSP XMM0-15 YMM0-15 ZMM0-31 OPMASK0-7
>
> $perf record -e branches:p -Iax,bx,r8,r16,r31,ssp,xmm,ymm,zmm,opmask -c 100000 ./test
> $perf report -D
>
> ... ...
> 14027761992115 0xcf30 [0x8a8]: PERF_RECORD_SAMPLE(IP, 0x1): 29964/29964:
> 0xffffffff9f085e24 period: 100000 addr: 0
> ... intr regs: mask 0x18001010003 ABI 64-bit
> .... AX 0xdffffc0000000000
> .... BX 0xffff8882297685e8
> .... R8 0x0000000000000000
> .... R16 0x0000000000000000
> .... R31 0x0000000000000000
> .... SSP 0x0000000000000000
> ... SIMD ABI nr_vectors 32 vector_qwords 8 nr_pred 8 pred_qwords 1
> .... ZMM [0] 0xffffffffffffffff
> .... ZMM [0] 0x0000000000000001
> .... ZMM [0] 0x0000000000000000
> .... ZMM [0] 0x0000000000000000
> .... ZMM [0] 0x0000000000000000
> .... ZMM [0] 0x0000000000000000
> .... ZMM [0] 0x0000000000000000
> .... ZMM [0] 0x0000000000000000
> .... ZMM [1] 0x003a6b6165506d56
> ... ...
> .... ZMM [31] 0x0000000000000000
> .... ZMM [31] 0x0000000000000000
> .... ZMM [31] 0x0000000000000000
> .... ZMM [31] 0x0000000000000000
> .... ZMM [31] 0x0000000000000000
> .... ZMM [31] 0x0000000000000000
> .... ZMM [31] 0x0000000000000000
> .... ZMM [31] 0x0000000000000000
> .... OPMASK[0] 0x00000000fffffe00
> .... OPMASK[1] 0x0000000000ffffff
> .... OPMASK[2] 0x000000000000007f
> .... OPMASK[3] 0x0000000000000000
> .... OPMASK[4] 0x0000000000010080
> .... OPMASK[5] 0x0000000000000000
> .... OPMASK[6] 0x0000400004000000
> .... OPMASK[7] 0x0000000000000000
> ... ...
>
>
> History:
> v6: https://lore.kernel.org/all/20260209072047.2180332-1-dapeng1.mi@linux.intel.com/
> v5: https://lore.kernel.org/all/20251203065500.2597594-1-dapeng1.mi@linux.intel.com/
> v4: https://lore.kernel.org/all/20250925061213.178796-1-dapeng1.mi@linux.intel.com/
> v3: https://lore.kernel.org/lkml/20250815213435.1702022-1-kan.liang@linux.intel.com/
> v2: https://lore.kernel.org/lkml/20250626195610.405379-1-kan.liang@linux.intel.com/
> v1: https://lore.kernel.org/lkml/20250613134943.3186517-1-kan.liang@linux.intel.com/
>
> Dapeng Mi (12):
> perf/x86: Move hybrid PMU initialization before x86_pmu_starting_cpu()
> perf/x86/intel: Avoid PEBS event on fixed counters without extended
> PEBS
> perf/x86/intel: Enable large PEBS sampling for XMMs
> perf/x86/intel: Convert x86_perf_regs to per-cpu variables
> perf: Eliminate duplicate arch-specific function definitions
> x86/fpu: Ensure TIF_NEED_FPU_LOAD is set after saving FPU state
> perf/x86: Enable XMM Register Sampling for Non-PEBS Events
> perf/x86: Enable XMM register sampling for REGS_USER case
> perf: Enhance perf_reg_validate() with simd_enabled argument
> perf/x86/intel: Enable arch-PEBS based SIMD/eGPRs/SSP sampling
> perf/x86: Activate back-to-back NMI detection for arch-PEBS induced
> NMIs
> perf/x86/intel: Add sanity check for PEBS fragment size
>
> Kan Liang (12):
> perf/x86: Use x86_perf_regs in the x86 nmi handler
> perf/x86: Introduce x86-specific x86_pmu_setup_regs_data()
> x86/fpu/xstate: Add xsaves_nmi() helper
> perf: Move and rename has_extended_regs() for ARCH-specific use
> perf: Add sampling support for SIMD registers
> perf/x86: Enable XMM sampling using sample_simd_vec_reg_* fields
> perf/x86: Enable YMM sampling using sample_simd_vec_reg_* fields
> perf/x86: Enable ZMM sampling using sample_simd_vec_reg_* fields
> perf/x86: Enable OPMASK sampling using sample_simd_pred_reg_* fields
> perf/x86: Enable eGPRs sampling using sample_regs_* fields
> perf/x86: Enable SSP sampling using sample_regs_* fields
> perf/x86/intel: Enable PERF_PMU_CAP_SIMD_REGS capability
>
> arch/arm/kernel/perf_regs.c | 8 +-
> arch/arm64/kernel/perf_regs.c | 8 +-
> arch/csky/kernel/perf_regs.c | 8 +-
> arch/loongarch/kernel/perf_regs.c | 8 +-
> arch/mips/kernel/perf_regs.c | 8 +-
> arch/parisc/kernel/perf_regs.c | 8 +-
> arch/powerpc/perf/perf_regs.c | 2 +-
> arch/riscv/kernel/perf_regs.c | 8 +-
> arch/s390/kernel/perf_regs.c | 2 +-
> arch/x86/events/core.c | 392 +++++++++++++++++++++++++-
> arch/x86/events/intel/core.c | 127 ++++++++-
> arch/x86/events/intel/ds.c | 195 ++++++++++---
> arch/x86/events/perf_event.h | 85 +++++-
> arch/x86/include/asm/fpu/sched.h | 5 +-
> arch/x86/include/asm/fpu/xstate.h | 3 +
> arch/x86/include/asm/msr-index.h | 7 +
> arch/x86/include/asm/perf_event.h | 38 ++-
> arch/x86/include/uapi/asm/perf_regs.h | 51 ++++
> arch/x86/kernel/fpu/core.c | 27 +-
> arch/x86/kernel/fpu/xstate.c | 25 +-
> arch/x86/kernel/perf_regs.c | 134 +++++++--
> include/linux/perf_event.h | 16 ++
> include/linux/perf_regs.h | 36 +--
> include/uapi/linux/perf_event.h | 50 +++-
> kernel/events/core.c | 138 +++++++--
> tools/perf/util/header.c | 3 +-
> 26 files changed, 1193 insertions(+), 199 deletions(-)
>
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Patch v7 07/24] perf/x86: Introduce x86-specific x86_pmu_setup_regs_data()
2026-03-24 0:41 ` [Patch v7 07/24] perf/x86: Introduce x86-specific x86_pmu_setup_regs_data() Dapeng Mi
@ 2026-03-25 5:18 ` Mi, Dapeng
0 siblings, 0 replies; 33+ messages in thread
From: Mi, Dapeng @ 2026-03-25 5:18 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Kan Liang
On 3/24/2026 8:41 AM, Dapeng Mi wrote:
> From: Kan Liang <kan.liang@linux.intel.com>
>
> The current perf/x86 implementation uses the generic functions
> perf_sample_regs_user() and perf_sample_regs_intr() to set up registers
> data for sampling records. While this approach works for general
> registers, it falls short when adding sampling support for SIMD and APX
> eGPRs registers on x86 platforms.
>
> To address this, we introduce the x86-specific function
> x86_pmu_setup_regs_data() for setting up register data on x86 platforms.
>
> At present, x86_pmu_setup_regs_data() mirrors the logic of the generic
> functions perf_sample_regs_user() and perf_sample_regs_intr().
> Subsequent patches will introduce x86-specific enhancements.
>
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> ---
> arch/x86/events/core.c | 33 +++++++++++++++++++++++++++++++++
> arch/x86/events/intel/ds.c | 9 ++++++---
> arch/x86/events/perf_event.h | 4 ++++
> 3 files changed, 43 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index ad6cbc19592d..0a6c51e86e9b 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -1699,6 +1699,39 @@ static void x86_pmu_del(struct perf_event *event, int flags)
> static_call_cond(x86_pmu_del)(event);
> }
>
> +void x86_pmu_setup_regs_data(struct perf_event *event,
> + struct perf_sample_data *data,
> + struct pt_regs *regs)
> +{
> + struct perf_event_attr *attr = &event->attr;
> + u64 sample_type = attr->sample_type;
> +
> + if (sample_type & PERF_SAMPLE_REGS_USER) {
> + if (user_mode(regs)) {
> + data->regs_user.abi = perf_reg_abi(current);
> + data->regs_user.regs = regs;
> + } else if (!(current->flags & PF_KTHREAD)) {
Sashiko (an AI agent) reviewed this patchset. I picked the reasonable
comments and pasted them here.
"
Is it safe to rely on !(current->flags & PF_KTHREAD) here?
Core perf code replaced this with the is_user_task() helper in commit
76ed27608f7d to prevent crashes. If a task is exiting (where task->mm
is cleared but PF_KTHREAD is not set) or if it is an io_uring thread
(which uses PF_USER_WORKER), could this pass the check and cause a NULL
pointer dereference or leak uninitialized kernel registers to user-space?
"
The comment looks reasonable and I would fix it in the next version.
> + perf_get_regs_user(&data->regs_user, regs);
> + } else {
> + data->regs_user.abi = PERF_SAMPLE_REGS_ABI_NONE;
> + data->regs_user.regs = NULL;
> + }
> + data->dyn_size += sizeof(u64);
> + if (data->regs_user.regs)
> + data->dyn_size += hweight64(attr->sample_regs_user) * sizeof(u64);
> + data->sample_flags |= PERF_SAMPLE_REGS_USER;
> + }
> +
> + if (sample_type & PERF_SAMPLE_REGS_INTR) {
> + data->regs_intr.regs = regs;
> + data->regs_intr.abi = perf_reg_abi(current);
> + data->dyn_size += sizeof(u64);
> + if (data->regs_intr.regs)
> + data->dyn_size += hweight64(attr->sample_regs_intr) * sizeof(u64);
> + data->sample_flags |= PERF_SAMPLE_REGS_INTR;
> + }
> +}
> +
> int x86_pmu_handle_irq(struct pt_regs *regs)
> {
> struct perf_sample_data data;
> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
> index 52eb6eac5df3..b045297c02d0 100644
> --- a/arch/x86/events/intel/ds.c
> +++ b/arch/x86/events/intel/ds.c
> @@ -2450,6 +2450,7 @@ static inline void __setup_pebs_basic_group(struct perf_event *event,
> }
>
> static inline void __setup_pebs_gpr_group(struct perf_event *event,
> + struct perf_sample_data *data,
> struct pt_regs *regs,
> struct pebs_gprs *gprs,
> u64 sample_type)
> @@ -2459,8 +2460,10 @@ static inline void __setup_pebs_gpr_group(struct perf_event *event,
> regs->flags &= ~PERF_EFLAGS_EXACT;
> }
>
> - if (sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER))
> + if (sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER)) {
> adaptive_pebs_save_regs(regs, gprs);
> + x86_pmu_setup_regs_data(event, data, regs);
> + }
> }
>
> static inline void __setup_pebs_meminfo_group(struct perf_event *event,
> @@ -2553,7 +2556,7 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
> gprs = next_record;
> next_record = gprs + 1;
>
> - __setup_pebs_gpr_group(event, regs, gprs, sample_type);
> + __setup_pebs_gpr_group(event, data, regs, gprs, sample_type);
> }
>
> if (format_group & PEBS_DATACFG_MEMINFO) {
> @@ -2677,7 +2680,7 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
> gprs = next_record;
> next_record = gprs + 1;
>
> - __setup_pebs_gpr_group(event, regs,
> + __setup_pebs_gpr_group(event, data, regs,
> (struct pebs_gprs *)gprs,
> sample_type);
> }
> diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
> index fad87d3c8b2c..39c41947c70d 100644
> --- a/arch/x86/events/perf_event.h
> +++ b/arch/x86/events/perf_event.h
> @@ -1306,6 +1306,10 @@ void x86_pmu_enable_event(struct perf_event *event);
>
> int x86_pmu_handle_irq(struct pt_regs *regs);
>
> +void x86_pmu_setup_regs_data(struct perf_event *event,
> + struct perf_sample_data *data,
> + struct pt_regs *regs);
> +
> void x86_pmu_show_pmu_cap(struct pmu *pmu);
>
> static inline int x86_pmu_num_counters(struct pmu *pmu)
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Patch v7 11/24] perf/x86: Enable XMM Register Sampling for Non-PEBS Events
2026-03-24 0:41 ` [Patch v7 11/24] perf/x86: Enable XMM Register Sampling for Non-PEBS Events Dapeng Mi
@ 2026-03-25 7:30 ` Mi, Dapeng
0 siblings, 0 replies; 33+ messages in thread
From: Mi, Dapeng @ 2026-03-25 7:30 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Kan Liang
On 3/24/2026 8:41 AM, Dapeng Mi wrote:
> Previously, XMM register sampling was only available for PEBS events
> starting from Ice Lake. The support is now extended to non-PEBS events
> by utilizing the XSAVES instruction, thereby completing the
> feature set.
>
> To implement this, a 64-byte aligned buffer is required. A per-CPU
> ext_regs_buf is introduced to store SIMD and other registers, with an
> approximate size of 2K. The buffer is allocated using kzalloc_node(),
> ensuring natural and 64-byte alignment for all kmalloc() allocations
> with powers of 2.
>
> XMM sampling for non-PEBS events is supported in the REGS_INTR case.
> Support for REGS_USER will be added in a subsequent patch. For PEBS
> events, XMM register sampling data is directly retrieved from PEBS
> records.
>
> Future support for additional vector registers (YMM/ZMM/OPMASK) is
> planned. An `ext_regs_mask` is added to track the supported vector
> register groups.
>
> Co-developed-by: Kan Liang <kan.liang@linux.intel.com>
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> ---
>
> V7: Optimize and simplify x86_pmu_sample_xregs(), etc. No functional
> change.
>
> arch/x86/events/core.c | 139 +++++++++++++++++++++++++++---
> arch/x86/events/intel/core.c | 31 ++++++-
> arch/x86/events/intel/ds.c | 20 +++--
> arch/x86/events/perf_event.h | 11 ++-
> arch/x86/include/asm/fpu/xstate.h | 2 +
> arch/x86/include/asm/perf_event.h | 5 +-
> arch/x86/kernel/fpu/xstate.c | 2 +-
> 7 files changed, 185 insertions(+), 25 deletions(-)
>
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 0a6c51e86e9b..22965a8a22b3 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -410,6 +410,45 @@ set_ext_hw_attr(struct hw_perf_event *hwc, struct perf_event *event)
> return x86_pmu_extra_regs(val, event);
> }
>
> +static DEFINE_PER_CPU(struct xregs_state *, ext_regs_buf);
> +
> +static void release_ext_regs_buffers(void)
> +{
> + int cpu;
> +
> + if (!x86_pmu.ext_regs_mask)
> + return;
> +
> + for_each_possible_cpu(cpu) {
> + kfree(per_cpu(ext_regs_buf, cpu));
> + per_cpu(ext_regs_buf, cpu) = NULL;
> + }
> +}
> +
> +static void reserve_ext_regs_buffers(void)
> +{
> + bool compacted = cpu_feature_enabled(X86_FEATURE_XCOMPACTED);
> + unsigned int size;
> + int cpu;
> +
> + if (!x86_pmu.ext_regs_mask)
> + return;
> +
> + size = xstate_calculate_size(x86_pmu.ext_regs_mask, compacted);
> +
> + for_each_possible_cpu(cpu) {
> + per_cpu(ext_regs_buf, cpu) = kzalloc_node(size, GFP_KERNEL,
> + cpu_to_node(cpu));
Pasting the comment from Sashiko (an AI review agent) here:
"
Does kzalloc_node() guarantee the strict 64-byte alignment required by the
XSAVES instruction? If debugging options like CONFIG_KASAN or
CONFIG_SLUB_DEBUG add redzone padding, could this shift the object offset
and trigger a #GP fault in NMI context?
"
kzalloc_node() (essentially kmalloc()) usually returns an address aligned
to a power of two of the allocation size. Since the xstate size is well
above 64 bytes, the allocated memory is usually 64-byte aligned, but that
is not a strict API guarantee.
I'm not quite sure whether CONFIG_KASAN or CONFIG_SLUB_DEBUG would break
the alignment, although the explanation for these options sounds
reasonable. I enabled both Kconfig items and tested xsaves-based sampling
on NVL; no crash was found.
That may just be luck, though. In any case, we need to ensure the xsave
memory is 64-byte aligned rather than depending on the internal
implementation of kzalloc_node(), which is somewhat risky. Will add a
forced alignment in the next version.
> + if (!per_cpu(ext_regs_buf, cpu))
> + goto err;
> + }
> +
> + return;
> +
> +err:
> + release_ext_regs_buffers();
> +}
> +
> int x86_reserve_hardware(void)
> {
> int err = 0;
> @@ -422,6 +461,7 @@ int x86_reserve_hardware(void)
> } else {
> reserve_ds_buffers();
> reserve_lbr_buffers();
> + reserve_ext_regs_buffers();
> }
> }
> if (!err)
> @@ -438,6 +478,7 @@ void x86_release_hardware(void)
> release_pmc_hardware();
> release_ds_buffers();
> release_lbr_buffers();
> + release_ext_regs_buffers();
> mutex_unlock(&pmc_reserve_mutex);
> }
> }
> @@ -655,18 +696,23 @@ int x86_pmu_hw_config(struct perf_event *event)
> return -EINVAL;
> }
>
> - /* sample_regs_user never support XMM registers */
> - if (unlikely(event->attr.sample_regs_user & PERF_REG_EXTENDED_MASK))
> - return -EINVAL;
> - /*
> - * Besides the general purpose registers, XMM registers may
> - * be collected in PEBS on some platforms, e.g. Icelake
> - */
> - if (unlikely(event->attr.sample_regs_intr & PERF_REG_EXTENDED_MASK)) {
> - if (!(event->pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS))
> - return -EINVAL;
> + if (event->attr.sample_type & PERF_SAMPLE_REGS_INTR) {
> + /*
> + * Besides the general purpose registers, XMM registers may
> + * be collected as well.
> + */
> + if (event_has_extended_regs(event)) {
> + if (!(event->pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS))
> + return -EINVAL;
> + }
> + }
>
> - if (!event->attr.precise_ip)
> + if (event->attr.sample_type & PERF_SAMPLE_REGS_USER) {
> + /*
> + * Currently XMM registers sampling for REGS_USER is not
> + * supported yet.
> + */
> + if (event_has_extended_regs(event))
> return -EINVAL;
> }
>
> @@ -1699,9 +1745,9 @@ static void x86_pmu_del(struct perf_event *event, int flags)
> static_call_cond(x86_pmu_del)(event);
> }
>
> -void x86_pmu_setup_regs_data(struct perf_event *event,
> - struct perf_sample_data *data,
> - struct pt_regs *regs)
> +static void x86_pmu_setup_gpregs_data(struct perf_event *event,
> + struct perf_sample_data *data,
> + struct pt_regs *regs)
> {
> struct perf_event_attr *attr = &event->attr;
> u64 sample_type = attr->sample_type;
> @@ -1732,6 +1778,71 @@ void x86_pmu_setup_regs_data(struct perf_event *event,
> }
> }
>
> +inline void x86_pmu_clear_perf_regs(struct pt_regs *regs)
> +{
> + struct x86_perf_regs *perf_regs = container_of(regs, struct x86_perf_regs, regs);
> +
> + perf_regs->xmm_regs = NULL;
> +}
> +
> +static inline void x86_pmu_update_xregs(struct x86_perf_regs *perf_regs,
> + struct xregs_state *xsave, u64 bitmap)
> +{
> + u64 mask;
> +
> + if (!xsave)
> + return;
> +
> + /* Filtered by what XSAVE really gives */
> + mask = bitmap & xsave->header.xfeatures;
> +
> + if (mask & XFEATURE_MASK_SSE)
> + perf_regs->xmm_space = xsave->i387.xmm_space;
> +}
> +
> +static void x86_pmu_sample_xregs(struct perf_event *event,
> + struct perf_sample_data *data,
> + u64 ignore_mask)
> +{
> + struct xregs_state *xsave = per_cpu(ext_regs_buf, smp_processor_id());
> + u64 sample_type = event->attr.sample_type;
> + struct x86_perf_regs *perf_regs;
> + u64 intr_mask = 0;
> + u64 mask = 0;
> +
> + if (WARN_ON_ONCE(!xsave))
> + return;
> +
> + if (event_has_extended_regs(event))
> + mask |= XFEATURE_MASK_SSE;
> +
> + mask &= x86_pmu.ext_regs_mask;
> +
> + if ((sample_type & PERF_SAMPLE_REGS_INTR) && data->regs_intr.abi)
> + intr_mask = mask & ~ignore_mask;
> +
> + if (intr_mask) {
> + perf_regs = container_of(data->regs_intr.regs,
> + struct x86_perf_regs, regs);
> + xsave->header.xfeatures = 0;
> + xsaves_nmi(xsave, mask);
> + x86_pmu_update_xregs(perf_regs, xsave, intr_mask);
> + }
> +}
> +
> +void x86_pmu_setup_regs_data(struct perf_event *event,
> + struct perf_sample_data *data,
> + struct pt_regs *regs,
> + u64 ignore_mask)
> +{
> + x86_pmu_setup_gpregs_data(event, data, regs);
> + /*
> + * ignore_mask indicates the PEBS sampled extended regs
> + * which are unnecessary to sample again.
> + */
> + x86_pmu_sample_xregs(event, data, ignore_mask);
> +}
> +
> int x86_pmu_handle_irq(struct pt_regs *regs)
> {
> struct perf_sample_data data;
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index 5a2b1503b6a5..5772dcc3bcbd 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -3649,6 +3649,9 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
> if (has_branch_stack(event))
> intel_pmu_lbr_save_brstack(&data, cpuc, event);
>
> + x86_pmu_clear_perf_regs(regs);
> + x86_pmu_setup_regs_data(event, &data, regs, 0);
> +
> perf_event_overflow(event, &data, regs);
> }
>
> @@ -5884,8 +5887,32 @@ static inline void __intel_update_large_pebs_flags(struct pmu *pmu)
> }
> }
>
> -#define counter_mask(_gp, _fixed) ((_gp) | ((u64)(_fixed) << INTEL_PMC_IDX_FIXED))
> +static void intel_extended_regs_init(struct pmu *pmu)
> +{
> + struct pmu *dest_pmu = pmu ? pmu : x86_get_pmu(smp_processor_id());
> +
> + /*
> + * Extend the vector registers support to non-PEBS.
> + * The feature is limited to newer Intel machines with
> + * PEBS V4+ or archPerfmonExt (0x23) enabled for now.
> + * In theory, the vector registers can be retrieved as
> + * long as the CPU supports. The support for the old
> + * generations may be added later if there is a
> + * requirement.
> + * Only support the extension when XSAVES is available.
> + */
> + if (!boot_cpu_has(X86_FEATURE_XSAVES))
> + return;
> +
> + if (!boot_cpu_has(X86_FEATURE_XMM) ||
> + !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
> + return;
>
> + x86_pmu.ext_regs_mask |= XFEATURE_MASK_SSE;
> + dest_pmu->capabilities |= PERF_PMU_CAP_EXTENDED_REGS;
> +}
> +
> +#define counter_mask(_gp, _fixed) ((_gp) | ((u64)(_fixed) << INTEL_PMC_IDX_FIXED))
> static void update_pmu_cap(struct pmu *pmu)
> {
> unsigned int eax, ebx, ecx, edx;
> @@ -5949,6 +5976,8 @@ static void update_pmu_cap(struct pmu *pmu)
> /* Perf Metric (Bit 15) and PEBS via PT (Bit 16) are hybrid enumeration */
> rdmsrq(MSR_IA32_PERF_CAPABILITIES, hybrid(pmu, intel_cap).capabilities);
> }
> +
> + intel_extended_regs_init(pmu);
> }
>
> static void intel_pmu_check_hybrid_pmus(struct x86_hybrid_pmu *pmu)
> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
> index b045297c02d0..74a41dae8a62 100644
> --- a/arch/x86/events/intel/ds.c
> +++ b/arch/x86/events/intel/ds.c
> @@ -1743,8 +1743,7 @@ static u64 pebs_update_adaptive_cfg(struct perf_event *event)
> if (gprs || (attr->precise_ip < 2) || tsx_weight)
> pebs_data_cfg |= PEBS_DATACFG_GP;
>
> - if ((sample_type & PERF_SAMPLE_REGS_INTR) &&
> - (attr->sample_regs_intr & PERF_REG_EXTENDED_MASK))
> + if (event_has_extended_regs(event))
> pebs_data_cfg |= PEBS_DATACFG_XMMS;
>
> if (sample_type & PERF_SAMPLE_BRANCH_STACK) {
> @@ -2460,10 +2459,8 @@ static inline void __setup_pebs_gpr_group(struct perf_event *event,
> regs->flags &= ~PERF_EFLAGS_EXACT;
> }
>
> - if (sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER)) {
> + if (sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER))
> adaptive_pebs_save_regs(regs, gprs);
> - x86_pmu_setup_regs_data(event, data, regs);
> - }
> }
>
> static inline void __setup_pebs_meminfo_group(struct perf_event *event,
> @@ -2521,6 +2518,7 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
> struct pebs_meminfo *meminfo = NULL;
> struct pebs_gprs *gprs = NULL;
> struct x86_perf_regs *perf_regs;
> + u64 ignore_mask = 0;
> u64 format_group;
> u16 retire;
>
> @@ -2528,7 +2526,7 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
> return;
>
> perf_regs = container_of(regs, struct x86_perf_regs, regs);
> - perf_regs->xmm_regs = NULL;
> + x86_pmu_clear_perf_regs(regs);
>
> format_group = basic->format_group;
>
> @@ -2575,6 +2573,7 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
> if (format_group & PEBS_DATACFG_XMMS) {
> struct pebs_xmm *xmm = next_record;
>
> + ignore_mask |= XFEATURE_MASK_SSE;
> next_record = xmm + 1;
> perf_regs->xmm_regs = xmm->xmm;
> }
> @@ -2613,6 +2612,8 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
> next_record += nr * sizeof(u64);
> }
>
> + x86_pmu_setup_regs_data(event, data, regs, ignore_mask);
> +
> WARN_ONCE(next_record != __pebs + basic->format_size,
> "PEBS record size %u, expected %llu, config %llx\n",
> basic->format_size,
> @@ -2638,6 +2639,7 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
> struct arch_pebs_aux *meminfo = NULL;
> struct arch_pebs_gprs *gprs = NULL;
> struct x86_perf_regs *perf_regs;
> + u64 ignore_mask = 0;
> void *next_record;
> void *at = __pebs;
>
> @@ -2645,7 +2647,7 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
> return;
>
> perf_regs = container_of(regs, struct x86_perf_regs, regs);
> - perf_regs->xmm_regs = NULL;
> + x86_pmu_clear_perf_regs(regs);
>
> __setup_perf_sample_data(event, iregs, data);
>
> @@ -2700,6 +2702,7 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
>
> next_record += sizeof(struct arch_pebs_xer_header);
>
> + ignore_mask |= XFEATURE_MASK_SSE;
> xmm = next_record;
> perf_regs->xmm_regs = xmm->xmm;
> next_record = xmm + 1;
> @@ -2747,6 +2750,8 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
> at = at + header->size;
> goto again;
> }
> +
> + x86_pmu_setup_regs_data(event, data, regs, ignore_mask);
> }
>
> static inline void *
> @@ -3409,6 +3414,7 @@ static void __init intel_ds_pebs_init(void)
> x86_pmu.flags |= PMU_FL_PEBS_ALL;
> x86_pmu.pebs_capable = ~0ULL;
> pebs_qual = "-baseline";
> + x86_pmu.ext_regs_mask |= XFEATURE_MASK_SSE;
> x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_EXTENDED_REGS;
> } else {
> /* Only basic record supported */
Sashiko complains
"
Is it safe to unconditionally set ext_regs_mask |= XFEATURE_MASK_SSE and
PERF_PMU_CAP_EXTENDED_REGS here if the CPU doesn't support XSAVES? If a
user boots with noxsaves or a hypervisor hides it, could a non-PEBS event
requesting extended registers trigger an Invalid Opcode (#UD) exception
when the NMI handler later executes the XSAVES instruction?
"
Hmm, that looks reasonable, especially in a guest environment. Will add a
check that XSAVES is supported here.
> diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
> index 39c41947c70d..a5e5bffb711e 100644
> --- a/arch/x86/events/perf_event.h
> +++ b/arch/x86/events/perf_event.h
> @@ -1020,6 +1020,12 @@ struct x86_pmu {
> struct extra_reg *extra_regs;
> unsigned int flags;
>
> + /*
> + * Extended regs, e.g., vector registers
> + * Utilize the same format as the XFEATURE_MASK_*
> + */
> + u64 ext_regs_mask;
> +
> /*
> * Intel host/guest support (KVM)
> */
> @@ -1306,9 +1312,12 @@ void x86_pmu_enable_event(struct perf_event *event);
>
> int x86_pmu_handle_irq(struct pt_regs *regs);
>
> +void x86_pmu_clear_perf_regs(struct pt_regs *regs);
> +
> void x86_pmu_setup_regs_data(struct perf_event *event,
> struct perf_sample_data *data,
> - struct pt_regs *regs);
> + struct pt_regs *regs,
> + u64 ignore_mask);
>
> void x86_pmu_show_pmu_cap(struct pmu *pmu);
>
> diff --git a/arch/x86/include/asm/fpu/xstate.h b/arch/x86/include/asm/fpu/xstate.h
> index 38fa8ff26559..19dec5f0b1c7 100644
> --- a/arch/x86/include/asm/fpu/xstate.h
> +++ b/arch/x86/include/asm/fpu/xstate.h
> @@ -112,6 +112,8 @@ void xsaves(struct xregs_state *xsave, u64 mask);
> void xrstors(struct xregs_state *xsave, u64 mask);
> void xsaves_nmi(struct xregs_state *xsave, u64 mask);
>
> +unsigned int xstate_calculate_size(u64 xfeatures, bool compacted);
> +
> int xfd_enable_feature(u64 xfd_err);
>
> #ifdef CONFIG_X86_64
> diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
> index 752cb319d5ea..e47a963a7cf0 100644
> --- a/arch/x86/include/asm/perf_event.h
> +++ b/arch/x86/include/asm/perf_event.h
> @@ -726,7 +726,10 @@ extern void perf_events_lapic_init(void);
> struct pt_regs;
> struct x86_perf_regs {
> struct pt_regs regs;
> - u64 *xmm_regs;
> + union {
> + u64 *xmm_regs;
> + u32 *xmm_space; /* for xsaves */
> + };
> };
>
> extern unsigned long perf_arch_instruction_pointer(struct pt_regs *regs);
> diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
> index 39e5f9e79a4c..93631f7a638e 100644
> --- a/arch/x86/kernel/fpu/xstate.c
> +++ b/arch/x86/kernel/fpu/xstate.c
> @@ -587,7 +587,7 @@ static bool __init check_xstate_against_struct(int nr)
> return true;
> }
>
> -static unsigned int xstate_calculate_size(u64 xfeatures, bool compacted)
> +unsigned int xstate_calculate_size(u64 xfeatures, bool compacted)
> {
> unsigned int topmost = fls64(xfeatures) - 1;
> unsigned int offset, i;
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Patch v7 12/24] perf/x86: Enable XMM register sampling for REGS_USER case
2026-03-24 0:41 ` [Patch v7 12/24] perf/x86: Enable XMM register sampling for REGS_USER case Dapeng Mi
@ 2026-03-25 7:58 ` Mi, Dapeng
0 siblings, 0 replies; 33+ messages in thread
From: Mi, Dapeng @ 2026-03-25 7:58 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Kan Liang
On 3/24/2026 8:41 AM, Dapeng Mi wrote:
> This patch adds support for XMM register sampling in the REGS_USER case.
>
> To handle simultaneous sampling of XMM registers for both REGS_INTR and
> REGS_USER cases, a per-CPU `x86_user_regs` is introduced to store
> REGS_USER-specific XMM registers. This prevents REGS_USER-specific XMM
> register data from being overwritten by REGS_INTR-specific data if they
> share the same `x86_perf_regs` structure.
>
> To sample user-space XMM registers, the `x86_pmu_update_user_ext_regs()`
> helper function is added. It checks if the `TIF_NEED_FPU_LOAD` flag is
> set. If so, the user-space XMM register data can be directly retrieved
> from the cached task FPU state, as the corresponding hardware registers
> have been cleared or switched to kernel-space data. Otherwise, the data
> must be read from the hardware registers using the `xsaves` instruction.
>
> For PEBS events, `x86_pmu_update_user_ext_regs()` checks if the
> PEBS-sampled XMM register data belongs to user-space. If so, no further
> action is needed. Otherwise, the user-space XMM register data needs to be
> re-sampled using the same method as for non-PEBS events.
>
> Co-developed-by: Kan Liang <kan.liang@linux.intel.com>
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> ---
> arch/x86/events/core.c | 95 ++++++++++++++++++++++++++++++++++++------
> 1 file changed, 82 insertions(+), 13 deletions(-)
>
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 22965a8a22b3..a5643c875190 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -696,7 +696,7 @@ int x86_pmu_hw_config(struct perf_event *event)
> return -EINVAL;
> }
>
> - if (event->attr.sample_type & PERF_SAMPLE_REGS_INTR) {
> + if (event->attr.sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER)) {
> /*
> * Besides the general purpose registers, XMM registers may
> * be collected as well.
> @@ -707,15 +707,6 @@ int x86_pmu_hw_config(struct perf_event *event)
> }
> }
>
> - if (event->attr.sample_type & PERF_SAMPLE_REGS_USER) {
> - /*
> - * Currently XMM registers sampling for REGS_USER is not
> - * supported yet.
> - */
> - if (event_has_extended_regs(event))
> - return -EINVAL;
> - }
> -
> return x86_setup_perfctr(event);
> }
Sashiko comments
"
With this check removed, can older platforms hit an uninitialized stack
pointer dereference?
In __intel_pmu_pebs_events(), a struct x86_perf_regs is allocated on the
stack without zero-initialization. setup_pebs_fixed_sample_data() populates
perf_regs.regs but leaves perf_regs.xmm_regs uninitialized and doesn't set
the PERF_SAMPLE_REGS_USER flag.
If perf_prepare_sample() sees the missing flag, it calls
perf_sample_regs_user(), and perf_output_sample_regs() eventually
dereferences the uninitialized stack memory at perf_regs->xmm_regs to
output XMM data.
"
The comment makes sense. Will clear xmm_regs and the other pointers in the
next version.
>
> @@ -1745,6 +1736,28 @@ static void x86_pmu_del(struct perf_event *event, int flags)
> static_call_cond(x86_pmu_del)(event);
> }
>
> +/*
> + * When both PERF_SAMPLE_REGS_INTR and PERF_SAMPLE_REGS_USER are set,
> + * an additional x86_perf_regs is required to save user-space registers.
> + * Without this, user-space register data may be overwritten by kernel-space
> + * registers.
> + */
> +static DEFINE_PER_CPU(struct x86_perf_regs, x86_user_regs);
> +static void x86_pmu_perf_get_regs_user(struct perf_sample_data *data,
> + struct pt_regs *regs)
> +{
> + struct x86_perf_regs *x86_regs_user = this_cpu_ptr(&x86_user_regs);
> + struct perf_regs regs_user;
> +
> + perf_get_regs_user(®s_user, regs);
> + data->regs_user.abi = regs_user.abi;
> + if (regs_user.regs) {
> + x86_regs_user->regs = *regs_user.regs;
> + data->regs_user.regs = &x86_regs_user->regs;
> + } else
> + data->regs_user.regs = NULL;
> +}
Sashiko comments
"
Does this leave the xmm_regs pointer uninitialized across perf events?
x86_user_regs is a per-CPU variable. When x86_pmu_perf_get_regs_user()
copies the general-purpose registers, it doesn't clear or initialize the
xmm_regs pointer in the union.
If user_mask evaluates to 0 later, x86_pmu_update_xregs() is bypassed,
leaving a stale xmm_regs pointer from a previous perf event on the same CPU.
Could this lead to a use-after-free or information leak when
perf_output_sample_regs() dereferences it?
"
It makes sense. Will clear xmm_regs and the other pointers in the next
version.
> +
> static void x86_pmu_setup_gpregs_data(struct perf_event *event,
> struct perf_sample_data *data,
> struct pt_regs *regs)
> @@ -1757,7 +1770,14 @@ static void x86_pmu_setup_gpregs_data(struct perf_event *event,
> data->regs_user.abi = perf_reg_abi(current);
> data->regs_user.regs = regs;
> } else if (!(current->flags & PF_KTHREAD)) {
> - perf_get_regs_user(&data->regs_user, regs);
> + /*
> + * It cannot guarantee that the kernel will never
> + * touch the registers outside of the pt_regs,
> + * especially when more and more registers
> + * (e.g., SIMD, eGPR) are added. The live data
> + * cannot be used.
> + */
> + x86_pmu_perf_get_regs_user(data, regs);
> } else {
> data->regs_user.abi = PERF_SAMPLE_REGS_ABI_NONE;
> data->regs_user.regs = NULL;
> @@ -1800,6 +1820,43 @@ static inline void x86_pmu_update_xregs(struct x86_perf_regs *perf_regs,
> perf_regs->xmm_space = xsave->i387.xmm_space;
> }
>
> +/*
> + * This function retrieves cached user-space fpu registers (XMM/YMM/ZMM).
> + * If TIF_NEED_FPU_LOAD is set, it indicates that the user-space FPU state
> + * is cached. Otherwise, the data should be read directly from the hardware
> + * registers.
> + */
> +static inline u64 x86_pmu_update_user_xregs(struct perf_sample_data *data,
> + u64 mask, u64 ignore_mask)
> +{
> + struct x86_perf_regs *perf_regs;
> + struct xregs_state *xsave;
> + struct fpu *fpu;
> + struct fpstate *fps;
> +
> + if (data->regs_user.abi == PERF_SAMPLE_REGS_ABI_NONE)
> + return 0;
> +
> + if (test_thread_flag(TIF_NEED_FPU_LOAD)) {
> + perf_regs = container_of(data->regs_user.regs,
> + struct x86_perf_regs, regs);
> + fpu = x86_task_fpu(current);
> + /*
> + * If __task_fpstate is set, it holds the right pointer,
> + * otherwise fpstate will.
> + */
> + fps = READ_ONCE(fpu->__task_fpstate);
> + if (!fps)
> + fps = fpu->fpstate;
> + xsave = &fps->regs.xsave;
> +
> + x86_pmu_update_xregs(perf_regs, xsave, mask);
> + return 0;
> + }
> +
> + return mask & ~ignore_mask;
> +}
Sashiko comments
"
If a PEBS event triggers in kernel space and TIF_NEED_FPU_LOAD is false,
PEBS has captured the kernel XMM registers, and the drain code sets
ignore_mask = XFEATURE_MASK_SSE.
When requesting user XMM registers, user_mask evaluates to 0 here because
of ignore_mask. Does this incorrectly suppress the xsaves_nmi fallback,
silently dropping the live user XMM registers?
This also seems to leave the xmm_regs pointer uninitialized, leading to
the use-after-free mentioned above.
"
It makes sense as well. We need to clear ignore_mask for user-space
sampling when the PEBS sample hits kernel space instead of user space.
Thanks.
> +
> static void x86_pmu_sample_xregs(struct perf_event *event,
> struct perf_sample_data *data,
> u64 ignore_mask)
> @@ -1807,6 +1864,7 @@ static void x86_pmu_sample_xregs(struct perf_event *event,
> struct xregs_state *xsave = per_cpu(ext_regs_buf, smp_processor_id());
> u64 sample_type = event->attr.sample_type;
> struct x86_perf_regs *perf_regs;
> + u64 user_mask = 0;
> u64 intr_mask = 0;
> u64 mask = 0;
>
> @@ -1817,15 +1875,26 @@ static void x86_pmu_sample_xregs(struct perf_event *event,
> mask |= XFEATURE_MASK_SSE;
>
> mask &= x86_pmu.ext_regs_mask;
> + if ((sample_type & PERF_SAMPLE_REGS_USER) && data->regs_user.abi)
> + user_mask = x86_pmu_update_user_xregs(data, mask, ignore_mask);
>
> if ((sample_type & PERF_SAMPLE_REGS_INTR) && data->regs_intr.abi)
> intr_mask = mask & ~ignore_mask;
>
> + if (user_mask | intr_mask) {
> + xsave->header.xfeatures = 0;
> + xsaves_nmi(xsave, user_mask | intr_mask);
> + }
> +
> + if (user_mask) {
> + perf_regs = container_of(data->regs_user.regs,
> + struct x86_perf_regs, regs);
> + x86_pmu_update_xregs(perf_regs, xsave, user_mask);
> + }
> +
> if (intr_mask) {
> perf_regs = container_of(data->regs_intr.regs,
> struct x86_perf_regs, regs);
> - xsave->header.xfeatures = 0;
> - xsaves_nmi(xsave, mask);
> x86_pmu_update_xregs(perf_regs, xsave, intr_mask);
> }
> }
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Patch v7 13/24] perf: Add sampling support for SIMD registers
2026-03-24 0:41 ` [Patch v7 13/24] perf: Add sampling support for SIMD registers Dapeng Mi
@ 2026-03-25 8:44 ` Mi, Dapeng
0 siblings, 0 replies; 33+ messages in thread
From: Mi, Dapeng @ 2026-03-25 8:44 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Kan Liang
On 3/24/2026 8:41 AM, Dapeng Mi wrote:
> From: Kan Liang <kan.liang@linux.intel.com>
>
> Users may be interested in sampling SIMD registers during profiling.
> The current sample_regs_* structure does not have sufficient space
> for all SIMD registers.
>
> To address this, new attribute fields sample_simd_{pred,vec}_reg_* are
> added to struct perf_event_attr to represent the SIMD registers that are
> expected to be sampled.
>
> Currently, the perf/x86 code supports XMM registers in sample_regs_*.
> To unify the configuration of SIMD registers and ensure a consistent
> method for configuring XMM and other SIMD registers, a new event
> attribute field, sample_simd_regs_enabled, is introduced. When
> sample_simd_regs_enabled is set, it indicates that all SIMD registers,
> including XMM, will be represented by the newly introduced
> sample_simd_{pred|vec}_reg_* fields. The original XMM space in
> sample_regs_* is reserved for future uses.
>
> Since SIMD registers are wider than 64 bits, a new output format is
> introduced. The number and width of SIMD registers are dumped first,
> followed by the register values. The number and width are based on the
> user's configuration. If they differ (e.g., on ARM), an ARCH-specific
> perf_output_sample_simd_regs function can be implemented separately.
>
> A new ABI, PERF_SAMPLE_REGS_ABI_SIMD, is added to indicate the new format.
> The enum perf_sample_regs_abi is now a bitmap. This change should not
> impact existing tools, as the version and bitmap remain the same for
> values 1 and 2.
>
> Additionally, two new __weak functions are introduced:
> - perf_simd_reg_value(): Retrieves the value of the requested SIMD
> register.
> - perf_simd_reg_validate(): Validates the configuration of the SIMD
> registers.
>
> A new flag, PERF_PMU_CAP_SIMD_REGS, is added to indicate that the PMU
> supports SIMD register dumping. An error is generated if
> sample_simd_{pred|vec}_reg_* is mistakenly set for a PMU that does not
> support this capability.
>
> Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
> Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> ---
>
> V7: Add macro word_for_each_set_bit() to simplify u64 set-bit iteration.
>
> include/linux/perf_event.h | 8 +++
> include/linux/perf_regs.h | 4 ++
> include/uapi/linux/perf_event.h | 50 ++++++++++++++--
> kernel/events/core.c | 102 +++++++++++++++++++++++++++++---
> tools/perf/util/header.c | 3 +-
> 5 files changed, 153 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index e8b0d8e2d2af..137d6e4a3403 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -306,6 +306,7 @@ struct perf_event_pmu_context;
> #define PERF_PMU_CAP_AUX_PAUSE 0x0200
> #define PERF_PMU_CAP_AUX_PREFER_LARGE 0x0400
> #define PERF_PMU_CAP_MEDIATED_VPMU 0x0800
> +#define PERF_PMU_CAP_SIMD_REGS 0x1000
>
> /**
> * pmu::scope
> @@ -1534,6 +1535,13 @@ perf_event__output_id_sample(struct perf_event *event,
> extern void
> perf_log_lost_samples(struct perf_event *event, u64 lost);
>
> +static inline bool event_has_simd_regs(struct perf_event *event)
> +{
> + struct perf_event_attr *attr = &event->attr;
> +
> + return attr->sample_simd_regs_enabled != 0;
> +}
> +
> static inline bool event_has_extended_regs(struct perf_event *event)
> {
> struct perf_event_attr *attr = &event->attr;
> diff --git a/include/linux/perf_regs.h b/include/linux/perf_regs.h
> index 144bcc3ff19f..518f28c6a7d4 100644
> --- a/include/linux/perf_regs.h
> +++ b/include/linux/perf_regs.h
> @@ -14,6 +14,10 @@ int perf_reg_validate(u64 mask);
> u64 perf_reg_abi(struct task_struct *task);
> void perf_get_regs_user(struct perf_regs *regs_user,
> struct pt_regs *regs);
> +int perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask,
> + u16 pred_qwords, u32 pred_mask);
> +u64 perf_simd_reg_value(struct pt_regs *regs, int idx,
> + u16 qwords_idx, bool pred);
>
> #ifdef CONFIG_HAVE_PERF_REGS
> #include <asm/perf_regs.h>
> diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
> index fd10aa8d697f..b8c8953928f8 100644
> --- a/include/uapi/linux/perf_event.h
> +++ b/include/uapi/linux/perf_event.h
> @@ -314,8 +314,9 @@ enum {
> */
> enum perf_sample_regs_abi {
> PERF_SAMPLE_REGS_ABI_NONE = 0,
> - PERF_SAMPLE_REGS_ABI_32 = 1,
> - PERF_SAMPLE_REGS_ABI_64 = 2,
> + PERF_SAMPLE_REGS_ABI_32 = (1 << 0),
> + PERF_SAMPLE_REGS_ABI_64 = (1 << 1),
> + PERF_SAMPLE_REGS_ABI_SIMD = (1 << 2),
> };
>
> /*
> @@ -383,6 +384,7 @@ enum perf_event_read_format {
> #define PERF_ATTR_SIZE_VER7 128 /* Add: sig_data */
> #define PERF_ATTR_SIZE_VER8 136 /* Add: config3 */
> #define PERF_ATTR_SIZE_VER9 144 /* add: config4 */
> +#define PERF_ATTR_SIZE_VER10 176 /* Add: sample_simd_{pred,vec}_reg_* */
>
> /*
> * 'struct perf_event_attr' contains various attributes that define
> @@ -547,6 +549,30 @@ struct perf_event_attr {
>
> __u64 config3; /* extension of config2 */
> __u64 config4; /* extension of config3 */
> +
> + /*
> + * Defines the sampling SIMD/PRED registers bitmap and qwords
> + * (8 bytes) length.
> + *
> + * sample_simd_regs_enabled != 0 indicates there are SIMD/PRED registers
> + * to be sampled, the SIMD/PRED registers bitmap and qwords length are
> + * represented in sample_{simd|pred}_pred_reg_{intr|user} and
> + * sample_simd_{vec|pred}_reg_qwords fields.
> + *
> + * sample_simd_regs_enabled == 0 indicates no SIMD/PRED registers are
> + * sampled.
> + */
> + union {
> + __u16 sample_simd_regs_enabled;
> + __u16 sample_simd_pred_reg_qwords;
> + };
> + __u16 sample_simd_vec_reg_qwords;
> + __u32 __reserved_4;
Sashiko comments
"
Since __reserved_4 is newly introduced to maintain alignment, should there be
validation in perf_copy_attr() in kernel/events/core.c to ensure it is
strictly zeroed out?
Without validation to reject non-zero values, userspace applications might
pass uninitialized memory here, potentially preventing the kernel from safely
repurposing __reserved_4 in the future without breaking existing binaries.
"
That's correct. I updated perf_attr_check() but forgot perf_copy_attr().
Will reject a non-zero __reserved_4 there in the next version.
> +
> + __u32 sample_simd_pred_reg_intr;
> + __u32 sample_simd_pred_reg_user;
> + __u64 sample_simd_vec_reg_intr;
> + __u64 sample_simd_vec_reg_user;
> };
>
> /*
> @@ -1020,7 +1046,15 @@ enum perf_event_type {
> * } && PERF_SAMPLE_BRANCH_STACK
> *
> * { u64 abi; # enum perf_sample_regs_abi
> - * u64 regs[weight(mask)]; } && PERF_SAMPLE_REGS_USER
> + * u64 regs[weight(mask)];
> + * struct {
> + * u16 nr_vectors; # 0 ... weight(sample_simd_vec_reg_user)
> + * u16 vector_qwords; # 0 ... sample_simd_vec_reg_qwords
> + * u16 nr_pred; # 0 ... weight(sample_simd_pred_reg_user)
> + * u16 pred_qwords; # 0 ... sample_simd_pred_reg_qwords
> + * u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords];
> + * } && (abi & PERF_SAMPLE_REGS_ABI_SIMD)
> + * } && PERF_SAMPLE_REGS_USER
> *
> * { u64 size;
> * char data[size];
> @@ -1047,7 +1081,15 @@ enum perf_event_type {
> * { u64 data_src; } && PERF_SAMPLE_DATA_SRC
> * { u64 transaction; } && PERF_SAMPLE_TRANSACTION
> * { u64 abi; # enum perf_sample_regs_abi
> - * u64 regs[weight(mask)]; } && PERF_SAMPLE_REGS_INTR
> + * u64 regs[weight(mask)];
> + * struct {
> + * u16 nr_vectors; # 0 ... weight(sample_simd_vec_reg_intr)
> + * u16 vector_qwords; # 0 ... sample_simd_vec_reg_qwords
> + * u16 nr_pred; # 0 ... weight(sample_simd_pred_reg_intr)
> + * u16 pred_qwords; # 0 ... sample_simd_pred_reg_qwords
> + * u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords];
> + * } && (abi & PERF_SAMPLE_REGS_ABI_SIMD)
> + * } && PERF_SAMPLE_REGS_INTR
> * { u64 phys_addr;} && PERF_SAMPLE_PHYS_ADDR
> * { u64 cgroup;} && PERF_SAMPLE_CGROUP
> * { u64 data_page_size;} && PERF_SAMPLE_DATA_PAGE_SIZE
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 7558bc5b1e73..de42575f517b 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -7753,22 +7753,60 @@ void __weak perf_get_regs_user(struct perf_regs *regs_user,
> regs_user->abi = perf_reg_abi(current);
> }
>
> +#define word_for_each_set_bit(bit, val) \
> + for (unsigned long long __v = (val); \
> + __v && ((bit = __builtin_ctzll(__v)), 1); \
> + __v &= __v - 1)
> +
> static void
> perf_output_sample_regs(struct perf_output_handle *handle,
> struct pt_regs *regs, u64 mask)
> {
> int bit;
> - DECLARE_BITMAP(_mask, 64);
> -
> - bitmap_from_u64(_mask, mask);
> - for_each_set_bit(bit, _mask, sizeof(mask) * BITS_PER_BYTE) {
> - u64 val;
>
> - val = perf_reg_value(regs, bit);
> + word_for_each_set_bit(bit, mask) {
> + u64 val = perf_reg_value(regs, bit);
> perf_output_put(handle, val);
> }
> }
>
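A quick userspace copy of the macro (not kernel code, names reproduced for
illustration) shows the iteration order it guarantees: least-significant set
bit first, straight from the u64 value, which is what avoids the endianness
problem of going through bitmap_from_u64()/for_each_set_bit(). It relies on
C99 for-loop declarations and the GCC/Clang __builtin_ctzll() builtin:

```c
#include <stddef.h>

/*
 * Userspace copy of the proposed word_for_each_set_bit(): iterates the
 * set bits of a u64 value from least to most significant, clearing the
 * lowest set bit each step (__v &= __v - 1).
 */
#define word_for_each_set_bit(bit, val)                         \
	for (unsigned long long __v = (val);                    \
	     __v && ((bit = __builtin_ctzll(__v)), 1);          \
	     __v &= __v - 1)

/* Collect the set-bit indices of @val into @out; returns how many. */
static int collect_set_bits(unsigned long long val, int *out)
{
	int bit, n = 0;

	word_for_each_set_bit(bit, val)
		out[n++] = bit;
	return n;
}
```

A zero mask never enters the loop body, so no special-casing is needed at
the call sites.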
> +static void
> +perf_output_sample_simd_regs(struct perf_output_handle *handle,
> + struct perf_event *event,
> + struct pt_regs *regs,
> + u64 mask, u32 pred_mask)
Sashiko comments
"
Does the static declaration of perf_output_sample_simd_regs() prevent
architectures from overriding it as described in the commit message?
Additionally, can the generic perf_prepare_sample() function reserve enough
space for this new payload?
"
Yes, the commit message needs to be enhanced; the description
"If they differ (e.g., on ARM), an ARCH-specific
perf_output_sample_simd_regs function can be implemented separately."
should be removed.
Currently the space for the new payload is reserved in
x86_pmu_setup_regs_data(), but that's indeed not a good place; would move
it into perf_prepare_sample(). Thanks.
> +{
> + u16 pred_qwords = event->attr.sample_simd_pred_reg_qwords;
> + u16 vec_qwords = event->attr.sample_simd_vec_reg_qwords;
> + u16 nr_vectors = hweight64(mask);
> + u16 nr_pred = hweight32(pred_mask);
> + int bit;
> +
> + perf_output_put(handle, nr_vectors);
> + perf_output_put(handle, vec_qwords);
> + perf_output_put(handle, nr_pred);
> + perf_output_put(handle, pred_qwords);
> +
> + if (nr_vectors) {
> + word_for_each_set_bit(bit, mask) {
> + for (int i = 0; i < vec_qwords; i++) {
> + u64 val = perf_simd_reg_value(regs, bit,
> + i, false);
> + perf_output_put(handle, val);
> + }
> + }
> + }
> + if (nr_pred) {
> + word_for_each_set_bit(bit, pred_mask) {
> + for (int i = 0; i < pred_qwords; i++) {
> + u64 val = perf_simd_reg_value(regs, bit,
> + i, true);
> + perf_output_put(handle, val);
> + }
> + }
> + }
> +}
> +
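For consumers of the new record, the payload appended after regs[] when
PERF_SAMPLE_REGS_ABI_SIMD is set is four u16 header fields followed by the
variable-length u64 data[], per the UAPI comment above. A small sketch of
the expected byte size (the function name is ours, purely illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Size in bytes of the SIMD payload emitted by
 * perf_output_sample_simd_regs(): the u16 quadruple
 * {nr_vectors, vector_qwords, nr_pred, pred_qwords}
 * followed by data[nr_vectors * vector_qwords + nr_pred * pred_qwords].
 */
static size_t simd_payload_size(uint16_t nr_vectors, uint16_t vector_qwords,
				uint16_t nr_pred, uint16_t pred_qwords)
{
	return 4 * sizeof(uint16_t) +
	       (size_t)(nr_vectors * vector_qwords +
			nr_pred * pred_qwords) * sizeof(uint64_t);
}
```

The degenerate case (nothing sampled) still costs one u64 for the header,
which matches the unconditional perf_output_put() calls above.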
> static void perf_sample_regs_user(struct perf_regs *regs_user,
> struct pt_regs *regs)
> {
> @@ -7790,6 +7828,17 @@ static void perf_sample_regs_intr(struct perf_regs *regs_intr,
> regs_intr->abi = perf_reg_abi(current);
> }
>
> +int __weak perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask,
> + u16 pred_qwords, u32 pred_mask)
> +{
> + return vec_qwords || vec_mask || pred_qwords || pred_mask ? -ENOSYS : 0;
> +}
> +
> +u64 __weak perf_simd_reg_value(struct pt_regs *regs, int idx,
> + u16 qwords_idx, bool pred)
> +{
> + return 0;
> +}
>
> /*
> * Get remaining task size from user stack pointer.
> @@ -8320,10 +8369,17 @@ void perf_output_sample(struct perf_output_handle *handle,
> perf_output_put(handle, abi);
>
> if (abi) {
> - u64 mask = event->attr.sample_regs_user;
> + struct perf_event_attr *attr = &event->attr;
> + u64 mask = attr->sample_regs_user;
> perf_output_sample_regs(handle,
> data->regs_user.regs,
> mask);
> + if (abi & PERF_SAMPLE_REGS_ABI_SIMD) {
> + perf_output_sample_simd_regs(handle, event,
> + data->regs_user.regs,
> + attr->sample_simd_vec_reg_user,
> + attr->sample_simd_pred_reg_user);
> + }
> }
> }
>
> @@ -8351,11 +8407,18 @@ void perf_output_sample(struct perf_output_handle *handle,
> perf_output_put(handle, abi);
>
> if (abi) {
> - u64 mask = event->attr.sample_regs_intr;
> + struct perf_event_attr *attr = &event->attr;
> + u64 mask = attr->sample_regs_intr;
>
> perf_output_sample_regs(handle,
> data->regs_intr.regs,
> mask);
> + if (abi & PERF_SAMPLE_REGS_ABI_SIMD) {
> + perf_output_sample_simd_regs(handle, event,
> + data->regs_intr.regs,
> + attr->sample_simd_vec_reg_intr,
> + attr->sample_simd_pred_reg_intr);
> + }
> }
> }
>
> @@ -13011,6 +13074,12 @@ static int perf_try_init_event(struct pmu *pmu, struct perf_event *event)
> if (ret)
> goto err_pmu;
>
> + if (!(pmu->capabilities & PERF_PMU_CAP_SIMD_REGS) &&
> + event_has_simd_regs(event)) {
> + ret = -EOPNOTSUPP;
> + goto err_destroy;
> + }
> +
> if (!(pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS) &&
> event_has_extended_regs(event)) {
> ret = -EOPNOTSUPP;
> @@ -13556,6 +13625,12 @@ static int perf_copy_attr(struct perf_event_attr __user *uattr,
> ret = perf_reg_validate(attr->sample_regs_user);
> if (ret)
> return ret;
> + ret = perf_simd_reg_validate(attr->sample_simd_vec_reg_qwords,
> + attr->sample_simd_vec_reg_user,
> + attr->sample_simd_pred_reg_qwords,
> + attr->sample_simd_pred_reg_user);
> + if (ret)
> + return ret;
> }
>
> if (attr->sample_type & PERF_SAMPLE_STACK_USER) {
> @@ -13576,8 +13651,17 @@ static int perf_copy_attr(struct perf_event_attr __user *uattr,
> if (!attr->sample_max_stack)
> attr->sample_max_stack = sysctl_perf_event_max_stack;
>
> - if (attr->sample_type & PERF_SAMPLE_REGS_INTR)
> + if (attr->sample_type & PERF_SAMPLE_REGS_INTR) {
> ret = perf_reg_validate(attr->sample_regs_intr);
> + if (ret)
> + return ret;
> + ret = perf_simd_reg_validate(attr->sample_simd_vec_reg_qwords,
> + attr->sample_simd_vec_reg_intr,
> + attr->sample_simd_pred_reg_qwords,
> + attr->sample_simd_pred_reg_intr);
> + if (ret)
> + return ret;
> + }
>
> #ifndef CONFIG_CGROUP_PERF
> if (attr->sample_type & PERF_SAMPLE_CGROUP)
> diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
> index 9142a8ba4019..f84200b9dd57 100644
> --- a/tools/perf/util/header.c
> +++ b/tools/perf/util/header.c
> @@ -2051,7 +2051,8 @@ static void free_event_desc(struct evsel *events)
>
> static bool perf_attr_check(struct perf_event_attr *attr)
> {
> - if (attr->__reserved_1 || attr->__reserved_2 || attr->__reserved_3) {
> + if (attr->__reserved_1 || attr->__reserved_2 ||
> + attr->__reserved_3 || attr->__reserved_4) {
> pr_warning("Reserved bits are set unexpectedly. "
> "Please update perf tool.\n");
> return false;
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Patch v7 14/24] perf/x86: Enable XMM sampling using sample_simd_vec_reg_* fields
2026-03-24 0:41 ` [Patch v7 14/24] perf/x86: Enable XMM sampling using sample_simd_vec_reg_* fields Dapeng Mi
@ 2026-03-25 9:01 ` Mi, Dapeng
0 siblings, 0 replies; 33+ messages in thread
From: Mi, Dapeng @ 2026-03-25 9:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Kan Liang
On 3/24/2026 8:41 AM, Dapeng Mi wrote:
> From: Kan Liang <kan.liang@linux.intel.com>
>
> This patch adds support for sampling XMM registers using the
> sample_simd_vec_reg_* fields.
>
> When sample_simd_regs_enabled is set, the original XMM space in the
> sample_regs_* field is treated as reserved. An INVAL error will be
> reported to user space if any bit is set in the original XMM space while
> sample_simd_regs_enabled is set.
>
> The perf_reg_value function requires ABI information to understand the
> layout of sample_regs. To accommodate this, a new abi field is introduced
> in the struct x86_perf_regs to represent ABI information.
>
> Additionally, the X86-specific perf_simd_reg_value function is implemented
> to retrieve the XMM register values.
>
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
> Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> ---
> arch/x86/events/core.c | 89 +++++++++++++++++++++++++--
> arch/x86/events/intel/ds.c | 2 +-
> arch/x86/events/perf_event.h | 12 ++++
> arch/x86/include/asm/perf_event.h | 1 +
> arch/x86/include/uapi/asm/perf_regs.h | 13 ++++
> arch/x86/kernel/perf_regs.c | 51 ++++++++++++++-
> 6 files changed, 161 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index a5643c875190..3c9b79b46a66 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -704,6 +704,22 @@ int x86_pmu_hw_config(struct perf_event *event)
> if (event_has_extended_regs(event)) {
> if (!(event->pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS))
> return -EINVAL;
> + if (event->attr.sample_simd_regs_enabled)
> + return -EINVAL;
> + }
> +
> + if (event_has_simd_regs(event)) {
> + if (!(event->pmu->capabilities & PERF_PMU_CAP_SIMD_REGS))
> + return -EINVAL;
> + /* Not require any vector registers but set width */
> + if (event->attr.sample_simd_vec_reg_qwords &&
> + !event->attr.sample_simd_vec_reg_intr &&
> + !event->attr.sample_simd_vec_reg_user)
> + return -EINVAL;
> + /* The vector registers set is not supported */
> + if (event_needs_xmm(event) &&
> + !(x86_pmu.ext_regs_mask & XFEATURE_MASK_SSE))
> + return -EINVAL;
> }
> }
>
> @@ -1749,6 +1765,7 @@ static void x86_pmu_perf_get_regs_user(struct perf_sample_data *data,
> struct x86_perf_regs *x86_regs_user = this_cpu_ptr(&x86_user_regs);
> struct perf_regs regs_user;
>
> + x86_regs_user->abi = PERF_SAMPLE_REGS_ABI_NONE;
> perf_get_regs_user(®s_user, regs);
> data->regs_user.abi = regs_user.abi;
> if (regs_user.regs) {
> @@ -1758,12 +1775,26 @@ static void x86_pmu_perf_get_regs_user(struct perf_sample_data *data,
> data->regs_user.regs = NULL;
> }
>
> +static inline void
> +x86_pmu_update_xregs_size(struct perf_event_attr *attr,
> + struct perf_sample_data *data,
> + struct pt_regs *regs,
> + u64 mask, u64 pred_mask)
> +{
> + u16 pred_qwords = attr->sample_simd_pred_reg_qwords;
> + u16 vec_qwords = attr->sample_simd_vec_reg_qwords;
> +
> + data->dyn_size += (hweight64(mask) * vec_qwords +
> + hweight64(pred_mask) * pred_qwords) * sizeof(u64);
> +}
> +
> static void x86_pmu_setup_gpregs_data(struct perf_event *event,
> struct perf_sample_data *data,
> struct pt_regs *regs)
> {
> struct perf_event_attr *attr = &event->attr;
> u64 sample_type = attr->sample_type;
> + struct x86_perf_regs *perf_regs;
>
> if (sample_type & PERF_SAMPLE_REGS_USER) {
> if (user_mode(regs)) {
> @@ -1783,8 +1814,13 @@ static void x86_pmu_setup_gpregs_data(struct perf_event *event,
> data->regs_user.regs = NULL;
> }
> data->dyn_size += sizeof(u64);
> - if (data->regs_user.regs)
> - data->dyn_size += hweight64(attr->sample_regs_user) * sizeof(u64);
> + if (data->regs_user.regs) {
> + data->dyn_size +=
> + hweight64(attr->sample_regs_user) * sizeof(u64);
> + perf_regs = container_of(data->regs_user.regs,
> + struct x86_perf_regs, regs);
> + perf_regs->abi = data->regs_user.abi;
> + }
> data->sample_flags |= PERF_SAMPLE_REGS_USER;
> }
>
> @@ -1792,8 +1828,13 @@ static void x86_pmu_setup_gpregs_data(struct perf_event *event,
> data->regs_intr.regs = regs;
> data->regs_intr.abi = perf_reg_abi(current);
> data->dyn_size += sizeof(u64);
> - if (data->regs_intr.regs)
> - data->dyn_size += hweight64(attr->sample_regs_intr) * sizeof(u64);
> + if (data->regs_intr.regs) {
> + data->dyn_size +=
> + hweight64(attr->sample_regs_intr) * sizeof(u64);
> + perf_regs = container_of(data->regs_intr.regs,
> + struct x86_perf_regs, regs);
> + perf_regs->abi = data->regs_intr.abi;
> + }
> data->sample_flags |= PERF_SAMPLE_REGS_INTR;
> }
> }
> @@ -1871,7 +1912,7 @@ static void x86_pmu_sample_xregs(struct perf_event *event,
> if (WARN_ON_ONCE(!xsave))
> return;
>
> - if (event_has_extended_regs(event))
> + if (event_needs_xmm(event))
> mask |= XFEATURE_MASK_SSE;
>
> mask &= x86_pmu.ext_regs_mask;
> @@ -1899,6 +1940,43 @@ static void x86_pmu_sample_xregs(struct perf_event *event,
> }
> }
>
> +static void x86_pmu_setup_xregs_data(struct perf_event *event,
> + struct perf_sample_data *data)
> +{
> + struct perf_event_attr *attr = &event->attr;
> + u64 sample_type = attr->sample_type;
> + struct x86_perf_regs *perf_regs;
> +
> + if (!attr->sample_simd_regs_enabled)
> + return;
> +
> + if (sample_type & PERF_SAMPLE_REGS_USER && data->regs_user.abi) {
> + perf_regs = container_of(data->regs_user.regs,
> + struct x86_perf_regs, regs);
> + perf_regs->abi |= PERF_SAMPLE_REGS_ABI_SIMD;
> +
> + /* num and qwords of vector and pred registers */
> + data->dyn_size += sizeof(u64);
> + data->regs_user.abi |= PERF_SAMPLE_REGS_ABI_SIMD;
> + x86_pmu_update_xregs_size(attr, data, data->regs_user.regs,
> + attr->sample_simd_vec_reg_user,
> + attr->sample_simd_pred_reg_user);
> + }
> +
> + if (sample_type & PERF_SAMPLE_REGS_INTR && data->regs_intr.abi) {
> + perf_regs = container_of(data->regs_intr.regs,
> + struct x86_perf_regs, regs);
> + perf_regs->abi |= PERF_SAMPLE_REGS_ABI_SIMD;
> +
> + /* num and qwords of vector and pred registers */
> + data->dyn_size += sizeof(u64);
> + data->regs_intr.abi |= PERF_SAMPLE_REGS_ABI_SIMD;
> + x86_pmu_update_xregs_size(attr, data, data->regs_intr.regs,
> + attr->sample_simd_vec_reg_intr,
> + attr->sample_simd_pred_reg_intr);
> + }
> +}
> +
> void x86_pmu_setup_regs_data(struct perf_event *event,
> struct perf_sample_data *data,
> struct pt_regs *regs,
> @@ -1910,6 +1988,7 @@ void x86_pmu_setup_regs_data(struct perf_event *event,
> * which are unnecessary to sample again.
> */
> x86_pmu_sample_xregs(event, data, ignore_mask);
> + x86_pmu_setup_xregs_data(event, data);
> }
>
> int x86_pmu_handle_irq(struct pt_regs *regs)
> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
> index 74a41dae8a62..ac9a1c2f0177 100644
> --- a/arch/x86/events/intel/ds.c
> +++ b/arch/x86/events/intel/ds.c
> @@ -1743,7 +1743,7 @@ static u64 pebs_update_adaptive_cfg(struct perf_event *event)
> if (gprs || (attr->precise_ip < 2) || tsx_weight)
> pebs_data_cfg |= PEBS_DATACFG_GP;
>
> - if (event_has_extended_regs(event))
> + if (event_needs_xmm(event))
> pebs_data_cfg |= PEBS_DATACFG_XMMS;
>
> if (sample_type & PERF_SAMPLE_BRANCH_STACK) {
> diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
> index a5e5bffb711e..26d162794a36 100644
> --- a/arch/x86/events/perf_event.h
> +++ b/arch/x86/events/perf_event.h
> @@ -137,6 +137,18 @@ static inline bool is_acr_event_group(struct perf_event *event)
> return check_leader_group(event->group_leader, PERF_X86_EVENT_ACR);
> }
>
> +static inline bool event_needs_xmm(struct perf_event *event)
> +{
> + if (event->attr.sample_simd_regs_enabled &&
> + event->attr.sample_simd_vec_reg_qwords >= PERF_X86_XMM_QWORDS)
> + return true;
> +
> + if (!event->attr.sample_simd_regs_enabled &&
> + event_has_extended_regs(event))
> + return true;
> + return false;
> +}
> +
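The helper above decides between the two mutually exclusive interfaces:
once sample_simd_regs_enabled is set, only the vector-width fields matter
and the legacy PERF_REG_EXTENDED_MASK bits are ignored. A userspace
restatement of that truth table (field names flattened into parameters,
purely illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

#define DEMO_XMM_QWORDS 2 /* PERF_X86_XMM_QWORDS: one XMM reg = 2 qwords */

/*
 * Restatement of event_needs_xmm(): XMM state is needed either via the
 * new SIMD interface (enabled and vector width covers XMM) or via the
 * legacy extended-regs bits in sample_regs_* when SIMD is disabled.
 */
static bool needs_xmm(bool simd_enabled, uint16_t vec_qwords,
		      bool has_extended_regs)
{
	if (simd_enabled)
		return vec_qwords >= DEMO_XMM_QWORDS;
	return has_extended_regs;
}
```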
> struct amd_nb {
> int nb_id; /* NorthBridge id */
> int refcnt; /* reference count */
> diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
> index e47a963a7cf0..e54d21c13494 100644
> --- a/arch/x86/include/asm/perf_event.h
> +++ b/arch/x86/include/asm/perf_event.h
> @@ -726,6 +726,7 @@ extern void perf_events_lapic_init(void);
> struct pt_regs;
> struct x86_perf_regs {
> struct pt_regs regs;
> + u64 abi;
> union {
> u64 *xmm_regs;
> u32 *xmm_space; /* for xsaves */
> diff --git a/arch/x86/include/uapi/asm/perf_regs.h b/arch/x86/include/uapi/asm/perf_regs.h
> index 7c9d2bb3833b..c5c1b3930df1 100644
> --- a/arch/x86/include/uapi/asm/perf_regs.h
> +++ b/arch/x86/include/uapi/asm/perf_regs.h
> @@ -55,4 +55,17 @@ enum perf_event_x86_regs {
>
> #define PERF_REG_EXTENDED_MASK (~((1ULL << PERF_REG_X86_XMM0) - 1))
>
> +enum {
> + PERF_X86_SIMD_XMM_REGS = 16,
> + PERF_X86_SIMD_VEC_REGS_MAX = PERF_X86_SIMD_XMM_REGS,
> +};
> +
> +#define PERF_X86_SIMD_VEC_MASK GENMASK_ULL(PERF_X86_SIMD_VEC_REGS_MAX - 1, 0)
> +
> +enum {
> + /* 1 qword = 8 bytes */
> + PERF_X86_XMM_QWORDS = 2,
> + PERF_X86_SIMD_QWORDS_MAX = PERF_X86_XMM_QWORDS,
> +};
> +
> #endif /* _ASM_X86_PERF_REGS_H */
> diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c
> index 81204cb7f723..9947a6b5c260 100644
> --- a/arch/x86/kernel/perf_regs.c
> +++ b/arch/x86/kernel/perf_regs.c
> @@ -63,6 +63,9 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
>
> if (idx >= PERF_REG_X86_XMM0 && idx < PERF_REG_X86_XMM_MAX) {
> perf_regs = container_of(regs, struct x86_perf_regs, regs);
> + /* SIMD registers are moved to dedicated sample_simd_vec_reg */
> + if (perf_regs->abi & PERF_SAMPLE_REGS_ABI_SIMD)
> + return 0;
> if (!perf_regs->xmm_regs)
> return 0;
> return perf_regs->xmm_regs[idx - PERF_REG_X86_XMM0];
> @@ -74,6 +77,51 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
> return regs_get_register(regs, pt_regs_offset[idx]);
> }
>
> +u64 perf_simd_reg_value(struct pt_regs *regs, int idx,
> + u16 qwords_idx, bool pred)
> +{
> + struct x86_perf_regs *perf_regs =
> + container_of(regs, struct x86_perf_regs, regs);
> +
> + if (pred)
> + return 0;
> +
> + if (WARN_ON_ONCE(idx >= PERF_X86_SIMD_VEC_REGS_MAX ||
> + qwords_idx >= PERF_X86_SIMD_QWORDS_MAX))
> + return 0;
> +
> + if (qwords_idx < PERF_X86_XMM_QWORDS) {
> + if (!perf_regs->xmm_regs)
> + return 0;
> + return perf_regs->xmm_regs[idx * PERF_X86_XMM_QWORDS +
> + qwords_idx];
> + }
> +
> + return 0;
> +}
> +
> +int perf_simd_reg_validate(u16 vec_qwords, u64 vec_mask,
> + u16 pred_qwords, u32 pred_mask)
> +{
> + /* pred_qwords implies sample_simd_{pred,vec}_reg_* are supported */
> + if (!pred_qwords)
> + return 0;
Sashiko comments
"
Does this early return completely bypass validation for vector registers
when pred_qwords is 0?
Since x86 does not require predicate registers for standard XMM sampling, it
appears an unprivileged user can set sample_simd_pred_reg_qwords to 0 while
supplying arbitrarily large values for sample_simd_vec_reg_qwords and
sample_simd_vec_reg_user.
Could this bypass bounds checks and lead to a massive dyn_size calculation
during a perf NMI, causing ring buffer corruption or an NMI watchdog hard
lockup?
"
It partly makes sense. Would enhance perf_simd_reg_validate() to cover
more cases.
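One possible shape of the enhanced check, sketched in userspace: since
pred_qwords aliases sample_simd_regs_enabled, a zero value should require
every other SIMD field to be zero too, instead of returning success early.
This is only an illustration of the idea; the actual v8 fix may differ.

```c
#include <errno.h>
#include <stdint.h>

#define DEMO_XMM_QWORDS 2
#define DEMO_VEC_MASK   ((1ULL << 16) - 1) /* XMM0-15, as in PERF_X86_SIMD_VEC_MASK */

static int validate_simd_regs(uint16_t vec_qwords, uint64_t vec_mask,
			      uint16_t pred_qwords, uint32_t pred_mask)
{
	/* SIMD sampling disabled: no other SIMD field may be set. */
	if (!pred_qwords)
		return (vec_qwords || vec_mask || pred_mask) ? -EINVAL : 0;

	if (!vec_qwords) {
		if (vec_mask)
			return -EINVAL;
	} else {
		if (vec_qwords != DEMO_XMM_QWORDS)
			return -EINVAL;
		if (vec_mask & ~DEMO_VEC_MASK)
			return -EINVAL;
	}
	/* x86 exposes no predicate registers at this level. */
	if (pred_mask)
		return -EINVAL;
	return 0;
}
```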
> +
> + if (!vec_qwords) {
> + if (vec_mask)
> + return -EINVAL;
> + } else {
> + if (vec_qwords != PERF_X86_XMM_QWORDS)
> + return -EINVAL;
> + if (vec_mask & ~PERF_X86_SIMD_VEC_MASK)
> + return -EINVAL;
> + }
> + if (pred_mask)
> + return -EINVAL;
> +
> + return 0;
> +}
> +
> #define PERF_REG_X86_RESERVED (((1ULL << PERF_REG_X86_XMM0) - 1) & \
> ~((1ULL << PERF_REG_X86_MAX) - 1))
>
> @@ -108,7 +156,8 @@ u64 perf_reg_abi(struct task_struct *task)
>
> int perf_reg_validate(u64 mask)
> {
> - if (!mask || (mask & (REG_NOSUPPORT | PERF_REG_X86_RESERVED)))
> + /* The mask could be 0 if only the SIMD registers are interested */
> + if (mask & (REG_NOSUPPORT | PERF_REG_X86_RESERVED))
> return -EINVAL;
>
> return 0;
* Re: [Patch v7 20/24] perf/x86: Enable SSP sampling using sample_regs_* fields
2026-03-24 0:41 ` [Patch v7 20/24] perf/x86: Enable SSP " Dapeng Mi
@ 2026-03-25 9:25 ` Mi, Dapeng
0 siblings, 0 replies; 33+ messages in thread
From: Mi, Dapeng @ 2026-03-25 9:25 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao,
Kan Liang
On 3/24/2026 8:41 AM, Dapeng Mi wrote:
> From: Kan Liang <kan.liang@linux.intel.com>
>
> This patch enables sampling of CET SSP register via the sample_regs_*
> fields.
>
> To sample SSP, the sample_simd_regs_enabled field must be set. This
> allows the spare space (reclaimed from the original XMM space) in the
> sample_regs_* fields to be used for representing SSP.
>
> Similar with eGPRs sampling, the perf_reg_value() function needs to
> check if the PERF_SAMPLE_REGS_ABI_SIMD flag is set first, and then
> determine whether to output SSP or legacy XMM registers to userspace.
>
> Additionally, arch-PEBS supports sampling SSP, which is placed into the
> GPRs group. This patch also enables arch-PEBS-based SSP sampling.
>
> Currently, SSP sampling is only supported on the x86_64 architecture, as
> CET is only available on x86_64 platforms.
>
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
> Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> ---
> arch/x86/events/core.c | 9 +++++++++
> arch/x86/events/intel/ds.c | 8 ++++++++
> arch/x86/events/perf_event.h | 10 ++++++++++
> arch/x86/include/asm/perf_event.h | 4 ++++
> arch/x86/include/uapi/asm/perf_regs.h | 7 ++++---
> arch/x86/kernel/perf_regs.c | 5 +++++
> 6 files changed, 40 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index d33cfbe38573..ea451b48b9d6 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -712,6 +712,10 @@ int x86_pmu_hw_config(struct perf_event *event)
> if (event_needs_egprs(event) &&
> !(x86_pmu.ext_regs_mask & XFEATURE_MASK_APX))
> return -EINVAL;
> + if (event_needs_ssp(event) &&
> + !(x86_pmu.ext_regs_mask & XFEATURE_MASK_CET_USER))
> + return -EINVAL;
> +
> /* Not require any vector registers but set width */
> if (event->attr.sample_simd_vec_reg_qwords &&
> !event->attr.sample_simd_vec_reg_intr &&
> @@ -1871,6 +1875,7 @@ inline void x86_pmu_clear_perf_regs(struct pt_regs *regs)
> perf_regs->h16zmm_regs = NULL;
> perf_regs->opmask_regs = NULL;
> perf_regs->egpr_regs = NULL;
> + perf_regs->cet_regs = NULL;
> }
>
> static inline void x86_pmu_update_xregs(struct x86_perf_regs *perf_regs,
> @@ -1896,6 +1901,8 @@ static inline void x86_pmu_update_xregs(struct x86_perf_regs *perf_regs,
> perf_regs->opmask = get_xsave_addr(xsave, XFEATURE_OPMASK);
> if (mask & XFEATURE_MASK_APX)
> perf_regs->egpr = get_xsave_addr(xsave, XFEATURE_APX);
> + if (mask & XFEATURE_MASK_CET_USER)
> + perf_regs->cet = get_xsave_addr(xsave, XFEATURE_CET_USER);
> }
>
> /*
> @@ -1961,6 +1968,8 @@ static void x86_pmu_sample_xregs(struct perf_event *event,
> mask |= XFEATURE_MASK_OPMASK;
> if (event_needs_egprs(event))
> mask |= XFEATURE_MASK_APX;
> + if (event_needs_ssp(event))
> + mask |= XFEATURE_MASK_CET_USER;
>
> mask &= x86_pmu.ext_regs_mask;
> if ((sample_type & PERF_SAMPLE_REGS_USER) && data->regs_user.abi)
> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
> index ac9a1c2f0177..3a2fb623e0ab 100644
> --- a/arch/x86/events/intel/ds.c
> +++ b/arch/x86/events/intel/ds.c
> @@ -2685,6 +2685,14 @@ static void setup_arch_pebs_sample_data(struct perf_event *event,
> __setup_pebs_gpr_group(event, data, regs,
> (struct pebs_gprs *)gprs,
> sample_type);
> +
> + /* Currently only user space mode enables SSP. */
> + if (user_mode(regs) && (sample_type &
> + (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER))) {
> + /* Point to r15 so that cet_regs[1] = ssp. */
> + perf_regs->cet_regs = &gprs->r15;
Sashiko comments
"
Is this relying on undefined behavior?
Treating the scalar struct member r15 as an array and accessing it via
cet_regs[1] can cause compilers with strict object bounds checking
(like -fsanitize=bounds) to trap. It also creates a brittle coupling
between the memory layout of struct arch_pebs_gprs and struct
cet_user_state.
"
OK. Would remove the cet_regs pointer and directly introduce a "u64 *ssp"
pointer to record the SSP address.
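To make the concern concrete: cet_regs[1] only reads SSP because SSP
happens to sit immediately after r15 in the arch-PEBS GPR group, and
indexing one element past a scalar struct member is exactly what
-fsanitize=bounds style checkers trap on. A tiny sketch with a
hypothetical stand-in struct, showing the dedicated-pointer alternative:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the tail of the arch-PEBS GPR group. */
struct demo_pebs_gprs {
	uint64_t r14;
	uint64_t r15;
	uint64_t ssp; /* laid out right after r15 in memory */
};

/*
 * Instead of cet_regs = &gprs->r15 and reading cet_regs[1], keep a
 * dedicated pointer straight at the SSP slot.
 */
static uint64_t read_ssp(const struct demo_pebs_gprs *gprs)
{
	const uint64_t *ssp = &gprs->ssp; /* proposed "u64 *ssp" style */
	return *ssp;
}
```

If the adjacency must still be relied on anywhere, an offsetof() build
assertion would at least document the layout coupling.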
> + ignore_mask = XFEATURE_MASK_CET_USER;
Sashiko comments
"
Should this be a bitwise OR (ignore_mask |= XFEATURE_MASK_CET_USER)?
Since setup_arch_pebs_sample_data() processes PEBS fragments in a loop,
overwriting ignore_mask with '=' instead of '|=' might lose previously set
bits from earlier fragments, such as XFEATURE_MASK_SSE from the xmm block.
This could cause x86_pmu_setup_regs_data() to unnecessarily read registers
from XSAVE and provide stale sample data.
"
In theory, it should not happen since there should be only one GPRs group
even across multiple PEBS fragments. But for consistency, would follow the
comment. Thanks.
> + }
> }
>
> if (header->aux) {
> diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
> index 0974fd8b0e20..36688d28407f 100644
> --- a/arch/x86/events/perf_event.h
> +++ b/arch/x86/events/perf_event.h
> @@ -197,6 +197,16 @@ static inline bool event_needs_egprs(struct perf_event *event)
> return false;
> }
>
> +static inline bool event_needs_ssp(struct perf_event *event)
> +{
> + if (event->attr.sample_simd_regs_enabled &&
> + (event->attr.sample_regs_user & BIT_ULL(PERF_REG_X86_SSP) ||
> + event->attr.sample_regs_intr & BIT_ULL(PERF_REG_X86_SSP)))
> + return true;
> +
> + return false;
> +}
> +
> struct amd_nb {
> int nb_id; /* NorthBridge id */
> int refcnt; /* reference count */
> diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
> index a54ea8fa6a04..0c6d58e6c98f 100644
> --- a/arch/x86/include/asm/perf_event.h
> +++ b/arch/x86/include/asm/perf_event.h
> @@ -751,6 +751,10 @@ struct x86_perf_regs {
> u64 *egpr_regs;
> struct apx_state *egpr;
> };
> + union {
> + u64 *cet_regs;
> + struct cet_user_state *cet;
> + };
> };
>
> extern unsigned long perf_arch_instruction_pointer(struct pt_regs *regs);
> diff --git a/arch/x86/include/uapi/asm/perf_regs.h b/arch/x86/include/uapi/asm/perf_regs.h
> index e721a47556d4..98a5b6c8e24c 100644
> --- a/arch/x86/include/uapi/asm/perf_regs.h
> +++ b/arch/x86/include/uapi/asm/perf_regs.h
> @@ -28,10 +28,10 @@ enum perf_event_x86_regs {
> PERF_REG_X86_R14,
> PERF_REG_X86_R15,
> /*
> - * The eGPRs and XMM have overlaps. Only one can be used
> + * The eGPRs/SSP and XMM have overlaps. Only one can be used
> * at a time. The ABI PERF_SAMPLE_REGS_ABI_SIMD is used to
> * distinguish which one is used. If PERF_SAMPLE_REGS_ABI_SIMD
> - * is set, then eGPRs is used, otherwise, XMM is used.
> + * is set, then eGPRs/SSP is used, otherwise, XMM is used.
> *
> * Extended GPRs (eGPRs)
> */
> @@ -51,10 +51,11 @@ enum perf_event_x86_regs {
> PERF_REG_X86_R29,
> PERF_REG_X86_R30,
> PERF_REG_X86_R31,
> + PERF_REG_X86_SSP,
> /* These are the limits for the GPRs. */
> PERF_REG_X86_32_MAX = PERF_REG_X86_GS + 1,
> PERF_REG_X86_64_MAX = PERF_REG_X86_R15 + 1,
> - PERF_REG_MISC_MAX = PERF_REG_X86_R31 + 1,
> + PERF_REG_MISC_MAX = PERF_REG_X86_SSP + 1,
>
> /* These all need two bits set because they are 128bit */
> PERF_REG_X86_XMM0 = 32,
> diff --git a/arch/x86/kernel/perf_regs.c b/arch/x86/kernel/perf_regs.c
> index a34cc52dbbeb..9715d1f90313 100644
> --- a/arch/x86/kernel/perf_regs.c
> +++ b/arch/x86/kernel/perf_regs.c
> @@ -70,6 +70,11 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
> return 0;
> return perf_regs->egpr_regs[idx - PERF_REG_X86_R16];
> }
> + if (idx == PERF_REG_X86_SSP) {
> + if (!perf_regs->cet_regs)
> + return 0;
> + return perf_regs->cet_regs[1];
> + }
> } else {
> if (idx >= PERF_REG_X86_XMM0 && idx < PERF_REG_X86_XMM_MAX) {
> if (!perf_regs->xmm_regs)
* Re: [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf
2026-03-24 0:40 [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Dapeng Mi
` (24 preceding siblings ...)
2026-03-24 1:08 ` [Patch v7 00/24] Support SIMD/eGPRs/SSP registers sampling for perf Mi, Dapeng
@ 2026-03-25 9:41 ` Mi, Dapeng
25 siblings, 0 replies; 33+ messages in thread
From: Mi, Dapeng @ 2026-03-25 9:41 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Thomas Gleixner, Dave Hansen, Ian Rogers,
Adrian Hunter, Jiri Olsa, Alexander Shishkin, Andi Kleen,
Eranian Stephane
Cc: Mark Rutland, broonie, Ravi Bangoria, linux-kernel,
linux-perf-users, Zide Chen, Falcon Thomas, Dapeng Mi, Xudong Hao
All comments from Sashiko
(https://sashiko.dev/#/patchset/20260324004118.3772171-1-dapeng1.mi%40linux.intel.com)
have been reviewed.
About 70% of the comments make sense; replies to them are posted on the
individual patches. About 10% of the comments are correct but have nothing
to do with this patch series, like "intel_pmu_init_hybrid() fails to
allocate x86_pmu.hybrid_pmu, but current code doesn't check if
x86_pmu.hybrid_pmu is null before accessing it". The remaining ~20% of the
comments are not correct and would be ignored.
All the correct comments would be addressed in the next version. Thanks.
On 3/24/2026 8:40 AM, Dapeng Mi wrote:
> Changes since V6:
> - Fix potential overwritten issue in hybrid PMU structure (patch 01/24)
> - Restrict PEBS events work on GP counters if no PEBS baseline suggested
> (patch 02/24)
> - Use per-cpu x86_intr_regs for perf_event_nmi_handler() instead of
> temporary variable (patch 06/24)
> - Add helper update_fpu_state_and_flag() to ensure TIF_NEED_FPU_LOAD is
> set after save_fpregs_to_fpstate() call (patch 09/24)
> - Optimize and simplify x86_pmu_sample_xregs(), etc. (patch 11/24)
> - Add macro word_for_each_set_bit() to simplify u64 set-bit iteration
> (patch 13/24)
> - Add sanity check for PEBS fragment size (patch 24/24)
>
> Changes since V5:
> - Introduce 3 commits to fix newly found PEBS issues (Patch 01~03/19)
> - Address Peter comments, including,
> * Fully support user-regs sampling of the SIMD/eGPRs/SSP registers
> * Adjust newly added fields in perf_event_attr to avoid holes
> * Fix the endian issue introduced by for_each_set_bit() in
> event/core.c
> * Remove some unnecessary macros from UAPI header perf_regs.h
> * Enhance b2b NMI detection for all PEBS handlers to ensure identical
> behaviors of all PEBS handlers
> - Split perf-tools patches which would be posted in a separate patchset
> later
>
> Changes since V4:
> - Rewrite some functions comments and commit messages (Dave)
> - Add arch-PEBS based SIMD/eGPRs/SSP sampling support (Patch 15/19)
> - Fix "suspicious NMI" warning observed on PTL/NVL P-core and DMR by
> activating back-to-back NMI detection mechanism (Patch 16/19)
> - Fix some minor issues on perf-tool patches (Patch 18/19)
>
> Changes since V3:
> - Drop the SIMD registers if an NMI hits kernel mode for REGS_USER.
> - Only dump the available regs, rather than zero and dump the
> unavailable regs. It's possible that the dumped registers are a subset
> of the requested registers.
> - Some minor updates to address Dapeng's comments in V3.
>
> Changes since V2:
> - Use the FPU format for the x86_pmu.ext_regs_mask as well
> - Add a check before invoking xsaves_nmi()
> - Add perf_simd_reg_check() to retrieve the number of available
> registers. If the kernel fails to get the requested registers, e.g.,
> XSAVES fails, nothing dumps to the userspace (the V2 dumps all 0s).
> - Add POC perf tool patches
>
> Changes since V1:
> - Apply the new interfaces to configure and dump the SIMD registers
> - Utilize the existing FPU functions, e.g., xstate_calculate_size,
> get_xsave_addr().
>
> Starting from Intel Ice Lake, XMM registers can be collected in a PEBS
> record. Future Architecture PEBS will include additional registers such
> as YMM, ZMM, OPMASK, SSP and APX eGPRs, contingent on hardware support.
>
> This patch set introduces a software solution to mitigate the hardware
> requirement by utilizing the XSAVES command to retrieve the requested
> registers in the overflow handler. This feature is no longer limited to
> PEBS events or specific platforms. While the hardware solution remains
> preferable due to its lower overhead and higher accuracy, this software
> approach provides a viable alternative.
>
> The solution is theoretically compatible with all x86 platforms but is
> currently enabled on newer platforms, including Sapphire Rapids and
> later P-core server platforms, Sierra Forest and later E-core server
> platforms and recent Client platforms, like Arrow Lake, Panther Lake and
> Nova Lake.
>
> Newly supported registers include YMM, ZMM, OPMASK, SSP, and APX eGPRs.
> Due to space constraints in sample_regs_user/intr, new fields have been
> introduced in the perf_event_attr structure to accommodate these
> registers.
>
> Following the long discussion on V1,
> https://lore.kernel.org/lkml/3f1c9a9e-cb63-47ff-a5e9-06555fa6cc9a@linux.intel.com/
> the following new fields are introduced:
>
> @@ -547,6 +549,25 @@ struct perf_event_attr {
>
> __u64 config3; /* extension of config2 */
> __u64 config4; /* extension of config3 */
> +
> + /*
> + * Defines set of SIMD registers to dump on samples.
> + * The sample_simd_regs_enabled !=0 implies the
> + * set of SIMD registers is used to config all SIMD registers.
> + * If !sample_simd_regs_enabled, sample_regs_XXX may be used to
> + * config some SIMD registers on X86.
> + */
> + union {
> + __u16 sample_simd_regs_enabled;
> + __u16 sample_simd_pred_reg_qwords;
> + };
> + __u16 sample_simd_vec_reg_qwords;
> + __u32 __reserved_4;
> +
> + __u32 sample_simd_pred_reg_intr;
> + __u32 sample_simd_pred_reg_user;
> + __u64 sample_simd_vec_reg_intr;
> + __u64 sample_simd_vec_reg_user;
> };
>
> /*
> @@ -1020,7 +1041,15 @@ enum perf_event_type {
> * } && PERF_SAMPLE_BRANCH_STACK
> *
> * { u64 abi; # enum perf_sample_regs_abi
> - * u64 regs[weight(mask)]; } && PERF_SAMPLE_REGS_USER
> + * u64 regs[weight(mask)];
> + * struct {
> + * u16 nr_vectors; # 0 ... weight(sample_simd_vec_reg_user)
> + * u16 vector_qwords; # 0 ... sample_simd_vec_reg_qwords
> + * u16 nr_pred; # 0 ... weight(sample_simd_pred_reg_user)
> + * u16 pred_qwords; # 0 ... sample_simd_pred_reg_qwords
> + * u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords];
> + * } && (abi & PERF_SAMPLE_REGS_ABI_SIMD)
> + * } && PERF_SAMPLE_REGS_USER
> *
> * { u64 size;
> * char data[size];
> @@ -1047,7 +1076,15 @@ enum perf_event_type {
> * { u64 data_src; } && PERF_SAMPLE_DATA_SRC
> * { u64 transaction; } && PERF_SAMPLE_TRANSACTION
> * { u64 abi; # enum perf_sample_regs_abi
> - * u64 regs[weight(mask)]; } && PERF_SAMPLE_REGS_INTR
> + * u64 regs[weight(mask)];
> + * struct {
> + * u16 nr_vectors; # 0 ... weight(sample_simd_vec_reg_intr)
> + * u16 vector_qwords; # 0 ... sample_simd_vec_reg_qwords
> + * u16 nr_pred; # 0 ... weight(sample_simd_pred_reg_intr)
> + * u16 pred_qwords; # 0 ... sample_simd_pred_reg_qwords
> + * u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords];
> + * } && (abi & PERF_SAMPLE_REGS_ABI_SIMD)
> + * } && PERF_SAMPLE_REGS_INTR
> * { u64 phys_addr;} && PERF_SAMPLE_PHYS_ADDR
> * { u64 cgroup;} && PERF_SAMPLE_CGROUP
> * { u64 data_page_size;} && PERF_SAMPLE_DATA_PAGE_SIZE
>
>
> To maintain simplicity, a single width field per register class,
> sample_simd_{vec|pred}_reg_qwords, indicates the register width in
> qwords. For example:
> - sample_simd_vec_reg_qwords = 2 for XMM registers (128 bits) on x86
> - sample_simd_vec_reg_qwords = 4 for YMM registers (256 bits) on x86
>
> Four additional fields, sample_simd_{vec|pred}_reg_{intr|user}, carry
> the bitmaps of registers to sample. For instance, the bitmap for the
> 16 x86 XMM registers is 0xffff. Although users can theoretically
> sample a subset of registers, the current perf-tool implementation
> samples all registers of each type to avoid extra complexity.
>
> A new ABI flag, PERF_SAMPLE_REGS_ABI_SIMD, is introduced to signal
> user space tools that SIMD registers are present in sampling records.
> When this flag is set, tools should expect extra SIMD register data
> following the general register data. The layout of the extra SIMD
> register data is as follows:
>
> u16 nr_vectors;
> u16 vector_qwords;
> u16 nr_pred;
> u16 pred_qwords;
> u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords];
>
> With this patch set, sampling for the aforementioned registers is
> supported on the Intel Nova Lake platform.
>
> Examples:
> $perf record -I?
> available registers: AX BX CX DX SI DI BP SP IP FLAGS CS SS R8 R9 R10
> R11 R12 R13 R14 R15 R16 R17 R18 R19 R20 R21 R22 R23 R24 R25 R26 R27 R28
> R29 R30 R31 SSP XMM0-15 YMM0-15 ZMM0-31 OPMASK0-7
>
> $perf record --user-regs=?
> available registers: AX BX CX DX SI DI BP SP IP FLAGS CS SS R8 R9 R10
> R11 R12 R13 R14 R15 R16 R17 R18 R19 R20 R21 R22 R23 R24 R25 R26 R27 R28
> R29 R30 R31 SSP XMM0-15 YMM0-15 ZMM0-31 OPMASK0-7
>
> $perf record -e branches:p -Iax,bx,r8,r16,r31,ssp,xmm,ymm,zmm,opmask -c 100000 ./test
> $perf report -D
>
> ... ...
> 14027761992115 0xcf30 [0x8a8]: PERF_RECORD_SAMPLE(IP, 0x1): 29964/29964:
> 0xffffffff9f085e24 period: 100000 addr: 0
> ... intr regs: mask 0x18001010003 ABI 64-bit
> .... AX 0xdffffc0000000000
> .... BX 0xffff8882297685e8
> .... R8 0x0000000000000000
> .... R16 0x0000000000000000
> .... R31 0x0000000000000000
> .... SSP 0x0000000000000000
> ... SIMD ABI nr_vectors 32 vector_qwords 8 nr_pred 8 pred_qwords 1
> .... ZMM [0] 0xffffffffffffffff
> .... ZMM [0] 0x0000000000000001
> .... ZMM [0] 0x0000000000000000
> .... ZMM [0] 0x0000000000000000
> .... ZMM [0] 0x0000000000000000
> .... ZMM [0] 0x0000000000000000
> .... ZMM [0] 0x0000000000000000
> .... ZMM [0] 0x0000000000000000
> .... ZMM [1] 0x003a6b6165506d56
> ... ...
> .... ZMM [31] 0x0000000000000000
> .... ZMM [31] 0x0000000000000000
> .... ZMM [31] 0x0000000000000000
> .... ZMM [31] 0x0000000000000000
> .... ZMM [31] 0x0000000000000000
> .... ZMM [31] 0x0000000000000000
> .... ZMM [31] 0x0000000000000000
> .... ZMM [31] 0x0000000000000000
> .... OPMASK[0] 0x00000000fffffe00
> .... OPMASK[1] 0x0000000000ffffff
> .... OPMASK[2] 0x000000000000007f
> .... OPMASK[3] 0x0000000000000000
> .... OPMASK[4] 0x0000000000010080
> .... OPMASK[5] 0x0000000000000000
> .... OPMASK[6] 0x0000400004000000
> .... OPMASK[7] 0x0000000000000000
> ... ...
>
>
> History:
> v6: https://lore.kernel.org/all/20260209072047.2180332-1-dapeng1.mi@linux.intel.com/
> v5: https://lore.kernel.org/all/20251203065500.2597594-1-dapeng1.mi@linux.intel.com/
> v4: https://lore.kernel.org/all/20250925061213.178796-1-dapeng1.mi@linux.intel.com/
> v3: https://lore.kernel.org/lkml/20250815213435.1702022-1-kan.liang@linux.intel.com/
> v2: https://lore.kernel.org/lkml/20250626195610.405379-1-kan.liang@linux.intel.com/
> v1: https://lore.kernel.org/lkml/20250613134943.3186517-1-kan.liang@linux.intel.com/
>
> Dapeng Mi (12):
> perf/x86: Move hybrid PMU initialization before x86_pmu_starting_cpu()
> perf/x86/intel: Avoid PEBS event on fixed counters without extended
> PEBS
> perf/x86/intel: Enable large PEBS sampling for XMMs
> perf/x86/intel: Convert x86_perf_regs to per-cpu variables
> perf: Eliminate duplicate arch-specific function definitions
> x86/fpu: Ensure TIF_NEED_FPU_LOAD is set after saving FPU state
> perf/x86: Enable XMM Register Sampling for Non-PEBS Events
> perf/x86: Enable XMM register sampling for REGS_USER case
> perf: Enhance perf_reg_validate() with simd_enabled argument
> perf/x86/intel: Enable arch-PEBS based SIMD/eGPRs/SSP sampling
> perf/x86: Activate back-to-back NMI detection for arch-PEBS induced
> NMIs
> perf/x86/intel: Add sanity check for PEBS fragment size
>
> Kan Liang (12):
> perf/x86: Use x86_perf_regs in the x86 nmi handler
> perf/x86: Introduce x86-specific x86_pmu_setup_regs_data()
> x86/fpu/xstate: Add xsaves_nmi() helper
> perf: Move and rename has_extended_regs() for ARCH-specific use
> perf: Add sampling support for SIMD registers
> perf/x86: Enable XMM sampling using sample_simd_vec_reg_* fields
> perf/x86: Enable YMM sampling using sample_simd_vec_reg_* fields
> perf/x86: Enable ZMM sampling using sample_simd_vec_reg_* fields
> perf/x86: Enable OPMASK sampling using sample_simd_pred_reg_* fields
> perf/x86: Enable eGPRs sampling using sample_regs_* fields
> perf/x86: Enable SSP sampling using sample_regs_* fields
> perf/x86/intel: Enable PERF_PMU_CAP_SIMD_REGS capability
>
> arch/arm/kernel/perf_regs.c | 8 +-
> arch/arm64/kernel/perf_regs.c | 8 +-
> arch/csky/kernel/perf_regs.c | 8 +-
> arch/loongarch/kernel/perf_regs.c | 8 +-
> arch/mips/kernel/perf_regs.c | 8 +-
> arch/parisc/kernel/perf_regs.c | 8 +-
> arch/powerpc/perf/perf_regs.c | 2 +-
> arch/riscv/kernel/perf_regs.c | 8 +-
> arch/s390/kernel/perf_regs.c | 2 +-
> arch/x86/events/core.c | 392 +++++++++++++++++++++++++-
> arch/x86/events/intel/core.c | 127 ++++++++-
> arch/x86/events/intel/ds.c | 195 ++++++++++---
> arch/x86/events/perf_event.h | 85 +++++-
> arch/x86/include/asm/fpu/sched.h | 5 +-
> arch/x86/include/asm/fpu/xstate.h | 3 +
> arch/x86/include/asm/msr-index.h | 7 +
> arch/x86/include/asm/perf_event.h | 38 ++-
> arch/x86/include/uapi/asm/perf_regs.h | 51 ++++
> arch/x86/kernel/fpu/core.c | 27 +-
> arch/x86/kernel/fpu/xstate.c | 25 +-
> arch/x86/kernel/perf_regs.c | 134 +++++++--
> include/linux/perf_event.h | 16 ++
> include/linux/perf_regs.h | 36 +--
> include/uapi/linux/perf_event.h | 50 +++-
> kernel/events/core.c | 138 +++++++--
> tools/perf/util/header.c | 3 +-
> 26 files changed, 1193 insertions(+), 199 deletions(-)
>