From: Dave Hansen <dave.hansen@intel.com>
To: kan.liang@linux.intel.com, peterz@infradead.org,
	mingo@redhat.com, acme@kernel.org, namhyung@kernel.org,
	tglx@linutronix.de, dave.hansen@linux.intel.com,
	irogers@google.com, adrian.hunter@intel.com, jolsa@kernel.org,
	alexander.shishkin@linux.intel.com, linux-kernel@vger.kernel.org
Cc: dapeng1.mi@linux.intel.com, ak@linux.intel.com, zide.chen@intel.com
Subject: Re: [RFC PATCH 05/12] perf/x86: Support XMM register for non-PEBS and REGS_USER
Date: Fri, 13 Jun 2025 08:15:02 -0700
Message-ID: <368e7626-c9bd-47be-bb42-f542dc3d67b7@intel.com>
In-Reply-To: <20250613134943.3186517-6-kan.liang@linux.intel.com>

> +static DEFINE_PER_CPU(void *, ext_regs_buf);

This should probably use one of the types in asm/fpu/types.h, not void*.
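
Something along these lines, for example (struct xregs_state lives in
asm/fpu/types.h; the exact type is your call):

	static DEFINE_PER_CPU(struct xregs_state *, ext_regs_buf);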

> +static void x86_pmu_get_ext_regs(struct x86_perf_regs *perf_regs, u64 mask)
> +{
> +	void *xsave = (void *)ALIGN((unsigned long)per_cpu(ext_regs_buf, smp_processor_id()), 64);

I'd just align the allocation to avoid having to align it at runtime
like this.

> +	struct xregs_state *xregs_xsave = xsave;
> +	u64 xcomp_bv;
> +
> +	if (WARN_ON_ONCE(!xsave))
> +		return;
> +
> +	xsaves_nmi(xsave, mask);
> +
> +	xcomp_bv = xregs_xsave->header.xcomp_bv;
> +	if (mask & XFEATURE_MASK_SSE && xcomp_bv & XFEATURE_SSE)
> +		perf_regs->xmm_regs = (u64 *)xregs_xsave->i387.xmm_space;
> +}

Could we please align the types on:

	perf_regs->xmm_regs
and
	xregs_xsave->i387.xmm_space

so that no casting is required?
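
One direction this could go (just a sketch; any other users of
xmm_regs would need the same treatment): the fxsave image keeps the
XMM registers as u32 xmm_space[64], so using the same element type on
the perf side makes the assignment work without a cast:

	/* e.g., in struct x86_perf_regs: */
	u32	*xmm_regs;

	/* ... and then: */
	perf_regs->xmm_regs = xregs_xsave->i387.xmm_space;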

> +static void reserve_ext_regs_buffers(void)
> +{
> +	size_t size;
> +	int cpu;
> +
> +	if (!x86_pmu.ext_regs_mask)
> +		return;
> +
> +	size = FXSAVE_SIZE + XSAVE_HDR_SIZE;
> +
> +	/* XSAVE feature requires 64-byte alignment. */
> +	size += 64;

Does this actually work? ;)

Take a look at your system when it boots. You should see some helpful
pr_info()'s:

> [    0.137276] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
> [    0.138799] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
> [    0.139681] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
> [    0.140576] x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
> [    0.141569] x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
> [    0.142804] x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
> [    0.143665] x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
> [    0.144436] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
> [    0.145290] x86/fpu: xstate_offset[5]:  832, xstate_sizes[5]:   64
> [    0.146238] x86/fpu: xstate_offset[6]:  896, xstate_sizes[6]:  512
> [    0.146803] x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
> [    0.147397] x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]:    8
> [    0.147986] x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.

Notice that we're talking about a buffer which is ~2k in size when
AVX-512 is in play. Is 'size' above that big?
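
(For reference: FXSAVE_SIZE is 512 and XSAVE_HDR_SIZE is 64, so the
'size' computed above works out to 512 + 64 + 64 = 640 bytes -- far
short of the 2440 bytes in the boot log above.)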

> +	for_each_possible_cpu(cpu) {
> +		per_cpu(ext_regs_buf, cpu) = kzalloc_node(size, GFP_KERNEL,
> +							  cpu_to_node(cpu));
> +		if (!per_cpu(ext_regs_buf, cpu))
> +			goto err;
> +	}

Right now, any kmalloc() of >= 256 bytes is going to be rounded up to a
power-of-2 size and aligned to that size, and thus also be 64-byte
aligned, although that is just an implementation detail today. There
_is_ a guarantee that kmalloc() allocations with power-of-2 sizes are
naturally aligned, and therefore also 64-byte aligned.

In other words, in practice, these kzalloc_node() allocations are
already 64-byte aligned and rounded up to a power-of-2 size.

You can *guarantee* they'll be 64-byte aligned by just rounding 'size'
up to the next power of 2. This won't increase the allocation because
it's already being rounded up internally.
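
To make that concrete, something like this would do it (sketch only --
the real fix is still sizing the buffer correctly in the first place):

	/* power-of-2 kmalloc() sizes are naturally aligned, so also 64-byte aligned */
	size = roundup_pow_of_two(size);

and then the runtime ALIGN() in x86_pmu_get_ext_regs() can go away too.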

I can also grumble a little bit because this reinvents the wheel, and I
suspect it'll continue reinventing the wheel when it actually sizes the
buffer correctly.

We already have code in the kernel to dynamically allocate an fpstate:
fpstate_realloc(). It uses vmalloc(), which wouldn't be my first choice
for this, but I also don't think it will hurt much. Looking at it, I'm
not sure how much of it you want to refactor and reuse, but you should
at least take a look.

There's also xstate_calculate_size(). That, you _definitely_ want to use
if you end up doing your own allocations.
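
Roughly, and assuming x86_pmu.ext_regs_mask is the xfeature bitmap you
end up handing to XSAVES (plus whatever header plumbing is needed to
call this from the perf code):

	/* let the FPU core compute the compacted-format buffer size */
	size = xstate_calculate_size(x86_pmu.ext_regs_mask, true);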


Thread overview: 60+ messages
2025-06-13 13:49 [RFC PATCH 00/12] Support vector and more extended registers in perf kan.liang
2025-06-13 13:49 ` [RFC PATCH 01/12] perf/x86: Use x86_perf_regs in the x86 nmi handler kan.liang
2025-06-13 13:49 ` [RFC PATCH 02/12] perf/x86: Setup the regs data kan.liang
2025-06-13 13:49 ` [RFC PATCH 03/12] x86/fpu/xstate: Add xsaves_nmi kan.liang
2025-06-13 14:39   ` Dave Hansen
2025-06-13 14:54     ` Liang, Kan
2025-06-13 15:19       ` Dave Hansen
2025-06-13 13:49 ` [RFC PATCH 04/12] perf: Move has_extended_regs() to header file kan.liang
2025-06-13 13:49 ` [RFC PATCH 05/12] perf/x86: Support XMM register for non-PEBS and REGS_USER kan.liang
2025-06-13 15:15   ` Dave Hansen [this message]
2025-06-13 17:51     ` Liang, Kan
2025-06-13 15:34   ` Dave Hansen
2025-06-13 18:14     ` Liang, Kan
2025-06-13 13:49 ` [RFC PATCH 06/12] perf: Support extension of sample_regs kan.liang
2025-06-17  8:00   ` Mi, Dapeng
2025-06-17  8:14   ` Peter Zijlstra
2025-06-17  9:49     ` Mi, Dapeng
2025-06-17 10:28       ` Peter Zijlstra
2025-06-17 12:14         ` Mi, Dapeng
2025-06-17 13:33           ` Peter Zijlstra
2025-06-17 14:06             ` Peter Zijlstra
2025-06-17 14:24               ` Mark Rutland
2025-06-17 14:44                 ` Peter Zijlstra
2025-06-17 14:55                   ` Mark Rutland
2025-06-17 19:00                     ` Mark Brown
2025-06-17 20:32                     ` Liang, Kan
2025-06-18  9:35                       ` Peter Zijlstra
2025-06-18 10:10                         ` Liang, Kan
2025-06-18 13:30                           ` Peter Zijlstra
2025-06-18 13:52                             ` Liang, Kan
2025-06-18 14:30                               ` Dave Hansen
2025-06-18 14:47                                 ` Dave Hansen
2025-06-18 15:24                                   ` Liang, Kan
2025-06-18 14:45                               ` Peter Zijlstra
2025-06-18 15:22                                 ` Liang, Kan
2025-06-13 13:49 ` [RFC PATCH 07/12] perf/x86: Add YMMH in extended regs kan.liang
2025-06-13 15:48   ` Dave Hansen
2025-06-13 13:49 ` [RFC PATCH 08/12] perf/x86: Add APX " kan.liang
2025-06-13 16:02   ` Dave Hansen
2025-06-13 17:17     ` Liang, Kan
2025-06-17  8:19   ` Peter Zijlstra
2025-06-13 13:49 ` [RFC PATCH 09/12] perf/x86: Add OPMASK " kan.liang
2025-06-13 13:49 ` [RFC PATCH 10/12] perf/x86: Add ZMM " kan.liang
2025-06-13 13:49 ` [RFC PATCH 11/12] perf/x86: Add SSP " kan.liang
2025-06-13 13:49 ` [RFC PATCH 12/12] perf/x86/intel: Support extended registers kan.liang
2025-06-17  7:50 ` [RFC PATCH 00/12] Support vector and more extended registers in perf Mi, Dapeng
2025-06-17  8:24 ` Peter Zijlstra
2025-06-17 13:52   ` Liang, Kan
2025-06-17 14:29     ` Peter Zijlstra
2025-06-17 15:23       ` Liang, Kan
2025-06-17 17:34         ` Peter Zijlstra
2025-06-18  0:57         ` Mi, Dapeng
2025-06-18 10:47           ` Liang, Kan
2025-06-18 12:28             ` Mi, Dapeng
2025-06-18 13:15               ` Liang, Kan
2025-06-19  0:41                 ` Mi, Dapeng
2025-06-19 11:11                   ` Liang, Kan
2025-06-19 12:26                     ` Mi, Dapeng
2025-06-19 13:38                     ` Peter Zijlstra
2025-06-19 14:27                       ` Liang, Kan
