The Linux Kernel Mailing List
* Re: [PATCH v10 01/30] arm64/sysreg: Update SMIDR_EL1 to DDI0601 2025-06
       [not found] ` <20260306-kvm-arm64-sme-v10-1-43f7683a0fb7@kernel.org>
@ 2026-05-08 17:12   ` Mark Rutland
  2026-05-09  0:43     ` Mark Brown
  0 siblings, 1 reply; 8+ messages in thread
From: Mark Rutland @ 2026-05-08 17:12 UTC (permalink / raw)
  To: Mark Brown
  Cc: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose,
	Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan,
	Oliver Upton, Dave Martin, Fuad Tabba, Ben Horgan,
	linux-arm-kernel, kvmarm, linux-kernel, kvm, linux-doc,
	linux-kselftest, Peter Maydell, Eric Auger

On Fri, Mar 06, 2026 at 05:00:53PM +0000, Mark Brown wrote:
> Update the definition of SMIDR_EL1 in the sysreg definition to reflect the
> information in DDI0601 2025-06. This includes somewhat more generic ways of
> describing the sharing of SMCUs, more information on supported priorities
> and provides additional resolution for describing affinity groups.

FWIW, these are all in ARM DDI 0487 M.b:

  https://developer.arm.com/documentation/ddi0487/mb/

Is anything later in the series going to depend on these fields, or
would everything behave correctly with the existing RES0 field
definitions?

> Reviewed-by: Fuad Tabba <tabba@google.com>
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>  arch/arm64/tools/sysreg | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> index 9d1c21108057..b6586accf344 100644
> --- a/arch/arm64/tools/sysreg
> +++ b/arch/arm64/tools/sysreg
> @@ -3655,11 +3655,15 @@ Field	3:0	BS
>  EndSysreg
>  
>  Sysreg	SMIDR_EL1	3	1	0	0	6
> -Res0	63:32
> +Res0	63:60
> +Field	59:56	NSMC
> +Field	55:52	HIP

Reading the ARM ARM, HIP is arguably a backwards-incompatible change.

Do we expect to expose that to VMs, or just hide priorities entirely? I
suspect we probably want to require that the guest sees
SMIDR_EL1.SMPS==0, and not care about any of that.

Mark.

> +Field	51:32	AFFINITY2
>  Field	31:24	IMPLEMENTER
>  Field	23:16	REVISION
>  Field	15	SMPS
> -Res0	14:12
> +Field	14:13	SH
> +Res0	12
>  Field	11:0	AFFINITY
>  EndSysreg
>  
> 
> -- 
> 2.47.3
> 

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v10 15/30] KVM: arm64: Support SME control registers
       [not found] ` <20260306-kvm-arm64-sme-v10-15-43f7683a0fb7@kernel.org>
@ 2026-05-08 17:20   ` Mark Rutland
  2026-05-11 14:17     ` Mark Brown
  0 siblings, 1 reply; 8+ messages in thread
From: Mark Rutland @ 2026-05-08 17:20 UTC (permalink / raw)
  To: Mark Brown
  Cc: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose,
	Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan,
	Oliver Upton, Dave Martin, Fuad Tabba, Ben Horgan,
	linux-arm-kernel, kvmarm, linux-kernel, kvm, linux-doc,
	linux-kselftest, Peter Maydell, Eric Auger

On Fri, Mar 06, 2026 at 05:01:07PM +0000, Mark Brown wrote:
> SME is configured by the system registers SMCR_EL1 and SMCR_EL2, add
> definitions and userspace access for them.  These control the SME vector
> length in a manner similar to that for SVE and also have feature enable
> bits for SME2 and FA64.  A subsequent patch will add management of them
> for guests as part of the general floating point context switch, as is
> done for the equivalent SVE registers.
> 
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_emulate.h  | 14 ++++++++++++
>  arch/arm64/include/asm/kvm_host.h     |  2 ++
>  arch/arm64/include/asm/vncr_mapping.h |  1 +
>  arch/arm64/kvm/sys_regs.c             | 42 ++++++++++++++++++++++++++++++++++-
>  4 files changed, 58 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index 5bf3d7e1d92c..7a11dd7d554c 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -89,6 +89,14 @@ static inline void kvm_inject_nested_sve_trap(struct kvm_vcpu *vcpu)
>  	kvm_inject_nested_sync(vcpu, esr);
>  }
>  
> +static inline void kvm_inject_nested_sme_trap(struct kvm_vcpu *vcpu)
> +{
> +	u64 esr = FIELD_PREP(ESR_ELx_EC_MASK, ESR_ELx_EC_SME) |
> +		  ESR_ELx_IL;
> +
> +	kvm_inject_nested_sync(vcpu, esr);
> +}

This implicitly has the SMTC field as 0b000, which is correct for traps
of SMCR_EL{1,2} due to SMEN, but wouldn't be right for other traps (e.g.
traps of ZT0).

If we only use this for traps of SMCR_EL{1,2}, that's ok, but I think
it's worth a comment, and possibly a more specific name. Perhaps
kvm_inject_nested_sme_smen_trap() for now.

[...]

> +static bool access_smcr_el2(struct kvm_vcpu *vcpu,
> +			    struct sys_reg_params *p,
> +			    const struct sys_reg_desc *r)
> +{
> +	unsigned int vq;
> +	u64 smcr;
> +
> +	if (guest_hyp_sme_traps_enabled(vcpu)) {
> +		kvm_inject_nested_sme_trap(vcpu);
> +		return false;
> +	}
> +
> +	if (!p->is_write) {
> +		p->regval = __vcpu_sys_reg(vcpu, SMCR_EL2);
> +		return true;
> +	}
> +
> +	smcr = p->regval & ~SMCR_ELx_RES0;
> +	if (!vcpu_has_fa64(vcpu))
> +		smcr &= ~SMCR_ELx_FA64;
> +	if (!vcpu_has_sme2(vcpu))
> +		smcr &= ~SMCR_ELx_EZT0;
> +
> +	vq = SYS_FIELD_GET(SMCR_ELx, LEN, smcr) + 1;
> +	vq = min(vq, vcpu_sme_max_vq(vcpu));
> +	smcr &= ~SMCR_ELx_LEN_MASK;
> +	smcr |= SYS_FIELD_PREP(SMCR_ELx, LEN, vq - 1);

I'm not sure this sanitization is correct or necessary, and the same
concern applies to ZCR_ELx.LEN.

AFAICT, none of the values for the SMCR_ELx.LEN and ZCR_ELx.LEN fields
are reserved or unallocated. Thus all the bits of those fields should be
stateful, and a read should observe the last value written, regardless
of the effective value of the field.

That means that the following at EL2 or vEL2 shouldn't produce a
warning:
		
	int len_write, len_read;

	for (len_write = 0; len_write < 16; len_write++) {
		write_sysreg_s(len_write, SYS_SMCR_EL2);

		len_read = read_sysreg_s(SYS_SMCR_EL2) & SMCR_ELx_LEN_MASK;
		WARN_ON(len_read != len_write);
	}

Either what we're doing is wrong, or the architecture requires a
clarification to say that values corresponding to unimplemented vector
lengths are reserved.

If those bits are always stateful, then the logic to sanitize the LEN
field shouldn't live here; the sanitization will need to happen when
consuming the effective value.
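A toy model of that split (illustrative names only, not kernel code): the raw LEN field stays stateful and reads return the last value written, while the clamp to the implemented maximum is applied only when the effective value is consumed.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model of "stateful field, clamp at consumption". Names are
 * illustrative, not kernel API; TOY_MAX_IMPL_VQ assumes a 512-bit
 * maximum implemented VL for the sketch.
 */
#define TOY_MAX_IMPL_VQ 4

struct toy_smcr {
	uint8_t len;		/* raw 4-bit LEN field, stateful */
};

void toy_smcr_write(struct toy_smcr *r, unsigned int len)
{
	r->len = len & 0xf;	/* preserve the written value, unclamped */
}

unsigned int toy_smcr_read(const struct toy_smcr *r)
{
	return r->len;		/* the last write is always observable */
}

unsigned int toy_effective_vq(const struct toy_smcr *r)
{
	unsigned int vq = r->len + 1;

	/* sanitize only here, when deriving the effective value */
	return vq < TOY_MAX_IMPL_VQ ? vq : TOY_MAX_IMPL_VQ;
}
```

With this split the read-back loop above passes for every LEN value, while consumers still only ever see an implemented VQ.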

Mark.


* Re: [PATCH v10 01/30] arm64/sysreg: Update SMIDR_EL1 to DDI0601 2025-06
  2026-05-08 17:12   ` [PATCH v10 01/30] arm64/sysreg: Update SMIDR_EL1 to DDI0601 2025-06 Mark Rutland
@ 2026-05-09  0:43     ` Mark Brown
  2026-05-11 10:40       ` Mark Rutland
  0 siblings, 1 reply; 8+ messages in thread
From: Mark Brown @ 2026-05-09  0:43 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose,
	Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan,
	Oliver Upton, Dave Martin, Fuad Tabba, Ben Horgan,
	linux-arm-kernel, kvmarm, linux-kernel, kvm, linux-doc,
	linux-kselftest, Peter Maydell, Eric Auger


On Fri, May 08, 2026 at 06:12:01PM +0100, Mark Rutland wrote:
> On Fri, Mar 06, 2026 at 05:00:53PM +0000, Mark Brown wrote:

> > Update the definition of SMIDR_EL1 in the sysreg definition to reflect the
> > information in DDI0601 2025-06. This includes somewhat more generic ways of
> > describing the sharing of SMCUs, more information on supported priorities
> > and provides additional resolution for describing affinity groups.

> FWIW, these are all in ARM DDI 0487 M.b:

>   https://developer.arm.com/documentation/ddi0487/mb/

> Is anything later in the series going to depend on these fields, or
> would everything behave correctly with the existing RES0 field
> definitions?

We're exposing the affinity fields so there's a build-time issue.

> > +Field	55:52	HIP

> Reading the ARM ARM, HIP is arguably a backwards-incompatible change.

Yes, I believe people are aware.

> Do we expect to expose that to VMs, or just hide priorities entirely? I
> suspect we probably want to require that the guest sees
> SMIDR_EL1.SMPS==0, and not care about any of that.

Currently we're not exposing priority support to guests so we don't need
to worry about it yet.



* Re: [PATCH v10 04/30] arm64/fpsimd: Determine maximum virtualisable SME vector length
       [not found] ` <20260306-kvm-arm64-sme-v10-4-43f7683a0fb7@kernel.org>
@ 2026-05-11 10:32   ` Mark Rutland
  2026-05-11 12:42     ` Mark Brown
  0 siblings, 1 reply; 8+ messages in thread
From: Mark Rutland @ 2026-05-11 10:32 UTC (permalink / raw)
  To: Mark Brown
  Cc: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose,
	Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan,
	Oliver Upton, Dave Martin, Fuad Tabba, Ben Horgan,
	linux-arm-kernel, kvmarm, linux-kernel, kvm, linux-doc,
	linux-kselftest, Peter Maydell, Eric Auger

On Fri, Mar 06, 2026 at 05:00:56PM +0000, Mark Brown wrote:
> As with SVE we can only virtualise SME vector lengths that are supported by
> all CPUs in the system, implement similar checks to those for SVE. 

So far so good.

> Since unlike SVE there are no specific vector lengths that are
> architecturally required the handling is subtly different, we report a
> system where this happens with a maximum vector length of
> SME_VQ_INVALID.

I think something went wrong during copyediting here.

A system where *what* happens?

> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>  arch/arm64/include/asm/fpsimd.h |  2 ++
>  arch/arm64/kernel/fpsimd.c      | 21 ++++++++++++++++++++-
>  2 files changed, 22 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
> index e97729aa3b2f..0cd8a866e844 100644
> --- a/arch/arm64/include/asm/fpsimd.h
> +++ b/arch/arm64/include/asm/fpsimd.h
> @@ -69,6 +69,8 @@ static inline void cpacr_restore(unsigned long cpacr)
>  #define ARCH_SVE_VQ_MAX ((ZCR_ELx_LEN_MASK >> ZCR_ELx_LEN_SHIFT) + 1)
>  #define SME_VQ_MAX	((SMCR_ELx_LEN_MASK >> SMCR_ELx_LEN_SHIFT) + 1)
>  
> +#define SME_VQ_INVALID	(SME_VQ_MAX + 1)

Does using (SME_VQ_MAX + 1) for this make something easier than if we
used 0?

My thinking is that 0 will be easier/clearer overall, since we can write
checks of the form:

	if (!info->max_virtualisable_vl) {
		/* SME is not virtualisable */
	}

... or:

	if (some_vl <= max_virtualisable_vl) {
		/* Check properties of a virtualisable VL */
	}

... and there's less scope for error.

> +
>  struct task_struct;
>  
>  extern void fpsimd_save_state(struct user_fpsimd_state *state);
> diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> index 2af0e0c5b9f4..49c050ef6db9 100644
> --- a/arch/arm64/kernel/fpsimd.c
> +++ b/arch/arm64/kernel/fpsimd.c
> @@ -1218,7 +1218,8 @@ void cpu_enable_sme(const struct arm64_cpu_capabilities *__always_unused p)
>  void __init sme_setup(void)
>  {
>  	struct vl_info *info = &vl_info[ARM64_VEC_SME];
> -	int min_bit, max_bit;
> +	DECLARE_BITMAP(tmp_map, SVE_VQ_MAX);
> +	int min_bit, max_bit, b;
>  
>  	if (!system_supports_sme())
>  		return;
> @@ -1249,12 +1250,30 @@ void __init sme_setup(void)
>  	 */
>  	set_sme_default_vl(find_supported_vector_length(ARM64_VEC_SME, 32));
>  
> +	bitmap_andnot(tmp_map, info->vq_partial_map, info->vq_map,
> +		      SVE_VQ_MAX);
> +
> +	b = find_last_bit(tmp_map, SVE_VQ_MAX);
> +	if (b >= SVE_VQ_MAX)
> +		/* All VLs virtualisable */
> +		info->max_virtualisable_vl = sve_vl_from_vq(ARCH_SVE_VQ_MAX);

I don't think this is right.

This test tells us that all VLs implemented by boot CPUs are
virtualisable. That set of VLs doesn't necessarily include the
architectural maximum VL.

IIUC that's not a problem for KVM, since KVM enforces that a guest's
maximum VL is an implemented VL. However, this is a problem for
vec_verify_vq_map().

Consider the case where all boot CPUs support only 128-bit, but later we
try to online a CPU that supports 128-bit and 256-bit. That CPU will be
rejected by vec_verify_vq_map().

Note that this isn't broken for SVE today as sve_setup() follows this up
with:

	if (info->max_virtualisable_vl > info->max_vl)
		info->max_virtualisable_vl = info->max_vl;

... but that won't be sufficient for streaming mode VLs given there's no
guarantee that smaller streaming VLs are implemented.

> +	else if (b == SVE_VQ_MAX - 1)
> +		/* No virtualisable VLs */
> +		info->max_virtualisable_vl = sve_vl_from_vq(SME_VQ_INVALID);

Similarly, I think this is broken for vec_verify_vq_map(). Consider a
case with two boot CPUs, where one boot CPU only supports 128-bit, and
the other boot cpu only supports 256-bit. If either CPU is hotplugged
out and then back in, it will be rejected by vec_verify_vq_map().

We don't have a similar problem for SVE since that's not architecturally
possible.

> +	else
> +		info->max_virtualisable_vl = sve_vl_from_vq(__bit_to_vq(b +  1));

This looks suspicious.

At this point we know that 'b' represents the smallest VL which is
partially supported. The next bit is almost never a power of two, is not
guaranteed to be an implemented VL, and is not guaranteed to be larger
than an implemented VL.

Imagine you have two CPUs:

* CPU x supports 128-bit.
* CPU y supports 128-bit and 512-bit.

The algorithm above will find 512-bit as the smallest partially
supported VL. For that, VQ==4 and b==12.

If max_virtualisable_vl is chosen as b+1, then that's b==13 and VQ==3,
which corresponds to a (not architecturally supported) 384-bit VL and is
bigger than the architecturally-valid 256-bit VL that neither CPU
supports.

As with the other cases above, that's broken for vec_verify_vq_map(),
but I think KVM will gracefully handle this.

To solve all of the above, I think what we actually want to do is find
the largest uniformly implemented VL which is smaller than the smallest
partially implemented VL.
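That rule can be sketched with plain VQ bitmasks rather than the kernel's bitmap helpers (bit n set meaning VQ n+1 is implemented; `uniform_map` is the intersection across CPUs, `partial_map` the union; all names hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Return the largest uniformly implemented VQ that is smaller than the
 * smallest partially implemented VQ, or 0 if no VQ is virtualisable.
 * uniform_map: VQs implemented by all CPUs (bit n => VQ n+1).
 * partial_map: VQs implemented by at least one CPU.
 */
unsigned int max_virtualisable_vq(uint32_t uniform_map, uint32_t partial_map)
{
	/* VQs implemented by some but not all CPUs */
	uint32_t mismatched = partial_map & ~uniform_map;
	unsigned int vq, limit = 32;

	if (mismatched) {
		/* smallest mismatched VQ == index of lowest set bit + 1 */
		limit = __builtin_ctz(mismatched) + 1;
	}

	/* largest uniformly implemented VQ strictly below that */
	for (vq = limit - 1; vq >= 1; vq--)
		if (uniform_map & (1u << (vq - 1)))
			return vq;

	return 0;	/* no virtualisable VL */
}
```

For the CPU x/y example above (x: 128-bit; y: 128-bit and 512-bit) this picks VQ 1, i.e. 128-bit, and it naturally returns 0 for the "no virtualisable VLs" case.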

>  	pr_info("SME: minimum available vector length %u bytes per vector\n",
>  		info->min_vl);
>  	pr_info("SME: maximum available vector length %u bytes per vector\n",
>  		info->max_vl);
>  	pr_info("SME: default vector length %u bytes per vector\n",
>  		get_sme_default_vl());
> +
> +	/* KVM decides whether to support mismatched systems. Just warn here: */
> +	if (info->max_virtualisable_vl < info->max_vl ||
> +	    info->max_virtualisable_vl == sve_vl_from_vq(SME_VQ_INVALID))
> +		pr_warn("SME: unvirtualisable vector lengths present\n");

If we used 0 instead of (SME_VQ_MAX + 1), this would just be:

	if (info->max_virtualisable_vl < info->max_vl)
		pr_warn(...);

As above, I think using 0 would be preferable.

Mark.

>  }
>  
>  void sme_suspend_exit(void)
> 
> -- 
> 2.47.3
> 


* Re: [PATCH v10 01/30] arm64/sysreg: Update SMIDR_EL1 to DDI0601 2025-06
  2026-05-09  0:43     ` Mark Brown
@ 2026-05-11 10:40       ` Mark Rutland
  2026-05-11 12:31         ` Mark Brown
  0 siblings, 1 reply; 8+ messages in thread
From: Mark Rutland @ 2026-05-11 10:40 UTC (permalink / raw)
  To: Mark Brown
  Cc: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose,
	Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan,
	Oliver Upton, Dave Martin, Fuad Tabba, Ben Horgan,
	linux-arm-kernel, kvmarm, linux-kernel, kvm, linux-doc,
	linux-kselftest, Peter Maydell, Eric Auger

On Sat, May 09, 2026 at 09:43:11AM +0900, Mark Brown wrote:
> On Fri, May 08, 2026 at 06:12:01PM +0100, Mark Rutland wrote:
> > On Fri, Mar 06, 2026 at 05:00:53PM +0000, Mark Brown wrote:
> 
> > > Update the definition of SMIDR_EL1 in the sysreg definition to reflect the
> > > information in DDI0601 2025-06. This includes somewhat more generic ways of
> > > describing the sharing of SMCUs, more information on supported priorities
> > > and provides additional resolution for describing affinity groups.
> 
> > FWIW, these are all in ARM DDI 0487 M.b:
> 
> >   https://developer.arm.com/documentation/ddi0487/mb/
> 
> > Is anything later in the series going to depend on these fields, or
> > would everything behave correctly with the existing RES0 field
> > definitions?
> 
> We're exposing the affinity fields so there's a build time issue.

What I'm asking is what is the rationale for updating these definitions?
e.g.

* Are we planning to use any of the fields in a specific way in the
  *host*?

* Are we planning to use any of the fields in a specific way in the
  *guest*?

* Is this updated just out of habit?

Knowing the rationale would help with review, even if that rationale is
just "it seemed nice to use the latest".

> > > +Field	55:52	HIP
> 
> > Reading the ARM ARM, HIP is arguably a backwards-incompatible change.
> 
> > Yes, I believe people are aware.

Ok. Is that considered a problem, or accepted?

Which people are aware?

> > Do we expect to expose that to VMs, or just hide priorities entirely? I
> > suspect we probably want to require that the guest sees
> > SMIDR_EL1.SMPS==0, and not care about any of that.
> 
> Currently we're not exposing priority support to guests so we don't need
> to worry about it yet.

Do we plan to in future?

Mark.


* Re: [PATCH v10 01/30] arm64/sysreg: Update SMIDR_EL1 to DDI0601 2025-06
  2026-05-11 10:40       ` Mark Rutland
@ 2026-05-11 12:31         ` Mark Brown
  0 siblings, 0 replies; 8+ messages in thread
From: Mark Brown @ 2026-05-11 12:31 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose,
	Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan,
	Oliver Upton, Dave Martin, Fuad Tabba, Ben Horgan,
	linux-arm-kernel, kvmarm, linux-kernel, kvm, linux-doc,
	linux-kselftest, Peter Maydell, Eric Auger


On Mon, May 11, 2026 at 11:40:02AM +0100, Mark Rutland wrote:
> On Sat, May 09, 2026 at 09:43:11AM +0900, Mark Brown wrote:

> > We're exposing the affinity fields so there's a build time issue.

> What I'm asking is what is the rationale for updating these definitions?
> e.g.

> * Are we planning to use any of the fields in a specific way in the
>   *host*?

> * Are we planning to use any of the fields in a specific way in the
>   *guest*?

> * Is this updated just out of habit?

> Knowing the rationale would help with review, even if that rationale is
> just "it seemed nice to use the latest".

The immediate motivation for including this in the current series is the
above.

> Which people are aware?

Probably a conversation best taken off list.

> > > Do we expect to expose that to VMs, or just hide priorities entirely? I
> > > suspect we probably want to require that the guest sees
> > > SMIDR_EL1.SMPS==0, and not care about any of that.

> > Currently we're not exposing priority support to guests so we don't need
> > to worry about it yet.

> Do we plan to in future?

The plan is to evaluate the priority support that hardware implements in
the context of practical systems and consider if and how to expose it
for either hosts or guests; we need to ensure we've got a good
understanding of the system impacts and user needs.  Architecturally the
priority support is all very implementation defined.



* Re: [PATCH v10 04/30] arm64/fpsimd: Determine maximum virtualisable SME vector length
  2026-05-11 10:32   ` [PATCH v10 04/30] arm64/fpsimd: Determine maximum virtualisable SME vector length Mark Rutland
@ 2026-05-11 12:42     ` Mark Brown
  0 siblings, 0 replies; 8+ messages in thread
From: Mark Brown @ 2026-05-11 12:42 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose,
	Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan,
	Oliver Upton, Dave Martin, Fuad Tabba, Ben Horgan,
	linux-arm-kernel, kvmarm, linux-kernel, kvm, linux-doc,
	linux-kselftest, Peter Maydell, Eric Auger


On Mon, May 11, 2026 at 11:32:56AM +0100, Mark Rutland wrote:
> On Fri, Mar 06, 2026 at 05:00:56PM +0000, Mark Brown wrote:

> > +#define SME_VQ_INVALID	(SME_VQ_MAX + 1)

> Does using (SME_VQ_MAX + 1) for this make something easier than if we
> used 0?

There were checks for VLs less than $THING which were causing annoyance,
IIRC, but 0 should be workable since we shouldn't offer SME to guests if
the invalid VL comes up.  I'll look again when I'm back from holiday.

> To solve all of the above, I think what we actually want to do is find
> the largest uniformly implemented VL which is smaller than the smallest
> partially implemented VL.

Yes, that's what we're going for.



* Re: [PATCH v10 15/30] KVM: arm64: Support SME control registers
  2026-05-08 17:20   ` [PATCH v10 15/30] KVM: arm64: Support SME control registers Mark Rutland
@ 2026-05-11 14:17     ` Mark Brown
  0 siblings, 0 replies; 8+ messages in thread
From: Mark Brown @ 2026-05-11 14:17 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Marc Zyngier, Joey Gouly, Catalin Marinas, Suzuki K Poulose,
	Will Deacon, Paolo Bonzini, Jonathan Corbet, Shuah Khan,
	Oliver Upton, Dave Martin, Fuad Tabba, Ben Horgan,
	linux-arm-kernel, kvmarm, linux-kernel, kvm, linux-doc,
	linux-kselftest, Peter Maydell, Eric Auger


On Fri, May 08, 2026 at 06:20:27PM +0100, Mark Rutland wrote:

> > +static bool access_smcr_el2(struct kvm_vcpu *vcpu,
> > +			    struct sys_reg_params *p,
> > +			    const struct sys_reg_desc *r)
> > +{

> > +	vq = SYS_FIELD_GET(SMCR_ELx, LEN, smcr) + 1;
> > +	vq = min(vq, vcpu_sme_max_vq(vcpu));
> > +	smcr &= ~SMCR_ELx_LEN_MASK;
> > +	smcr |= SYS_FIELD_PREP(SMCR_ELx, LEN, vq - 1);

> I'm not sure this sanitization is correct or necessary, and the same
> concern applies to ZCR_ELx.LEN.

> AFAICT, none of the values for the SMCR_ELx.LEN and ZCR_ELx.LEN fields
> are reserved or unallocated. Thus all the bits of those fields should be
> stateful, and a read should observe the last value written, regardless
> of the effective value of the field.

...

> Either what we're doing is wrong, or the architecture requires a
> clarification to say that values corresponding to unimplemented vector
> lengths are reserved.

> If those bits are always stateful, then the logic to sanitize the LEN
> field shouldn't live here; the sanitization will need to happen when
> consuming the effective value.

Your understanding of how these fields work matches mine, and writing
unimplemented values is part of the documented procedure for enumerating
the set of supported vector lengths.
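That enumeration procedure can be modelled in a few lines (hypothetical names; this uses the SVE-style rounding rule where the effective VL is the largest implemented VL not exceeding the request, and glosses over SME's IMPLEMENTATION DEFINED behaviour when no smaller VL is implemented):

```c
#include <assert.h>
#include <stdint.h>

/*
 * implemented_map: bit n set => VQ n+1 is implemented.
 * Effective VQ for a written LEN: largest implemented VQ <= LEN + 1,
 * or 0 if none (simplification; see the caveat above for SME).
 */
unsigned int toy_probe_vq(uint32_t implemented_map, unsigned int len)
{
	unsigned int vq;

	for (vq = len + 1; vq >= 1; vq--)
		if (implemented_map & (1u << (vq - 1)))
			return vq;
	return 0;
}

/* Enumerate: write every possible LEN and record the effective VQ seen. */
uint32_t toy_enumerate_vqs(uint32_t implemented_map)
{
	uint32_t found = 0;
	unsigned int len, vq;

	for (len = 0; len < 16; len++) {
		vq = toy_probe_vq(implemented_map, len);
		if (vq)
			found |= 1u << (vq - 1);
	}
	return found;
}
```

This is exactly why writing unimplemented LEN values has to be tolerated: the probe loop deliberately writes all sixteen values.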

As you note, this is duplicated from the handling of ZCR_ELx. It's not
clear to me why the code does this, but I figured there must be some good
reason for doing things this way that I just wasn't seeing, and that it
was safer to fit in with the existing code.  The handling for vector
lengths in general, and especially with NV, was quite unclear,
particularly prior to your fixes in 59419f10045b ("KVM: arm64: Eagerly
switch ZCR_EL{1,2}").

The changelog for b3d29a823099 ("KVM: arm64: nv: Handle ZCR_EL2 traps")
which introduced this for ZCR_ELx has a mention of mapping the requested
VL but it's not entirely clear to me what it means by that.  It does
mean that we can just load the guest's ZCR_EL2 and get the correct
behaviour for guest EL1 and EL0 when loading the guest state, so perhaps
that's all there is to it.

My expectation would have been to restrict the guest EL1/0 VL when we
load state into the registers as you allude to, something more like the
below (off the top of my head and completely untested, I'll pull this
into a proper patch later):

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 98b2976837b1..ddf8c2246139 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -501,11 +501,11 @@ static inline void fpsimd_lazy_switch_to_guest(struct kvm_vcpu *vcpu)
 		return;
 
 	if (vcpu_has_sve(vcpu)) {
+		zcr_el2 = vcpu_sve_max_vq(vcpu) - 1;
+
 		/* A guest hypervisor may restrict the effective max VL. */
-		if (is_nested_ctxt(vcpu))
-			zcr_el2 = __vcpu_sys_reg(vcpu, ZCR_EL2);
-		else
-			zcr_el2 = vcpu_sve_max_vq(vcpu) - 1;
+		if (is_nested_ctxt(vcpu) && !is_hyp_ctxt(vcpu))
+			zcr_el2 = min(zcr_el2, __vcpu_sys_reg(vcpu, ZCR_EL2));
 
 		write_sysreg_el2(zcr_el2, SYS_ZCR);
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 148fc3400ea8..b48f41acff82 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2874,9 +2874,7 @@ static bool access_zcr_el2(struct kvm_vcpu *vcpu,
 		return true;
 	}
 
-	vq = SYS_FIELD_GET(ZCR_ELx, LEN, p->regval) + 1;
-	vq = min(vq, vcpu_sve_max_vq(vcpu));
-	__vcpu_assign_sys_reg(vcpu, ZCR_EL2, vq - 1);
+	__vcpu_assign_sys_reg(vcpu, ZCR_EL2, p->regval);
 	return true;
 }
 


