public inbox for linux-arm-kernel@lists.infradead.org
From: Marc Zyngier <maz@kernel.org>
To: Fuad Tabba <tabba@google.com>
Cc: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	oliver.upton@linux.dev, joey.gouly@arm.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com,
	catalin.marinas@arm.com, will@kernel.org
Subject: Re: [PATCH v2 3/5] KVM: arm64: Refactor enter_exception64()
Date: Tue, 20 Jan 2026 14:57:18 +0000	[thread overview]
Message-ID: <864iogcoxd.wl-maz@kernel.org> (raw)
In-Reply-To: <20251211113828.370370-4-tabba@google.com>

On Thu, 11 Dec 2025 11:38:26 +0000,
Fuad Tabba <tabba@google.com> wrote:
> 
> From: Quentin Perret <qperret@google.com>
> 
> To simplify the injection of exceptions into the host in pKVM context,
> refactor enter_exception64() to split out the logic for calculating the
> exception vector offset and the target CPSR.
> 
> Extract two new helper functions:
>  - get_except64_offset(): Calculates exception vector offset based on
>    current/target exception levels and exception type
>  - get_except64_cpsr(): Computes the new CPSR/PSTATE when taking an
>    exception
> 
> A subsequent patch will use these helpers to inject UNDEF exceptions
> into the host when MTE system registers are accessed with MTE disabled.
> Extracting the helpers allows that code to reuse the exception entry
> logic without duplicating the CPSR and vector offset calculations.
> 
> No functional change intended.
> 
> Signed-off-by: Quentin Perret <qperret@google.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/kvm_emulate.h |   5 ++
>  arch/arm64/kvm/hyp/exception.c       | 100 ++++++++++++++++-----------
>  2 files changed, 63 insertions(+), 42 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index c9eab316398e..c3f04bd5b2a5 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -71,6 +71,11 @@ static inline int kvm_inject_serror(struct kvm_vcpu *vcpu)
>  	return kvm_inject_serror_esr(vcpu, ESR_ELx_ISV);
>  }
>  
> +unsigned long get_except64_offset(unsigned long psr, unsigned long target_mode,
> +				  enum exception_type type);
> +unsigned long get_except64_cpsr(unsigned long old, bool has_mte,
> +				unsigned long sctlr, unsigned long mode);

s/cpsr/pstate/ as we don't need to introduce more 32bit terminology.

> +
>  void kvm_vcpu_wfi(struct kvm_vcpu *vcpu);
>  
>  void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu);
> diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
> index bef40ddb16db..d3bcda665612 100644
> --- a/arch/arm64/kvm/hyp/exception.c
> +++ b/arch/arm64/kvm/hyp/exception.c
> @@ -65,12 +65,25 @@ static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
>  		vcpu->arch.ctxt.spsr_und = val;
>  }
>  
> +unsigned long get_except64_offset(unsigned long psr, unsigned long target_mode,
> +				  enum exception_type type)
> +{
> +	u64 mode = psr & (PSR_MODE_MASK | PSR_MODE32_BIT);
> +	u64 exc_offset;
> +
> +	if      (mode == target_mode)
> +		exc_offset = CURRENT_EL_SP_ELx_VECTOR;
> +	else if ((mode | PSR_MODE_THREAD_BIT) == target_mode)
> +		exc_offset = CURRENT_EL_SP_EL0_VECTOR;
> +	else if (!(mode & PSR_MODE32_BIT))
> +		exc_offset = LOWER_EL_AArch64_VECTOR;
> +	else
> +		exc_offset = LOWER_EL_AArch32_VECTOR;
> +
> +	return exc_offset + type;
> +}
> +
>  /*
> - * This performs the exception entry at a given EL (@target_mode), stashing PC
> - * and PSTATE into ELR and SPSR respectively, and compute the new PC/PSTATE.
> - * The EL passed to this function *must* be a non-secure, privileged mode with
> - * bit 0 being set (PSTATE.SP == 1).
> - *
>   * When an exception is taken, most PSTATE fields are left unchanged in the
>   * handler. However, some are explicitly overridden (e.g. M[4:0]). Luckily all
>   * of the inherited bits have the same position in the AArch64/AArch32 SPSR_ELx
> @@ -82,50 +95,17 @@ static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
>   * Here we manipulate the fields in order of the AArch64 SPSR_ELx layout, from
>   * MSB to LSB.
>   */
> -static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
> -			      enum exception_type type)
> +unsigned long get_except64_cpsr(unsigned long old, bool has_mte,
> +				unsigned long sctlr, unsigned long target_mode)

I really dislike the has_mte and sctlr thing.

The main reason is that it will not scale as we end up hiding more
features on the host (think PM and FEAT_EBEP, for example). Even
worse, some bits are not necessarily sourced from PSTATE or SCTLR
(EXLOCK depends on GCSCR_ELx, for example).

I think you really need to turn this into something that is flexible
enough that it can work for both host and guest, with an actual
abstraction. It is likely to look like a list of register accessors to
get to the correct data.

But it could well be that open-coding it is the least horrid solution.
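
Something like the following is what I have in mind (untested, all
names and constants invented for illustration, and only a couple of
PSTATE fields modelled — the real logic handles far more):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t u64;

/* Assumed bit positions, mirroring arm64 (illustrative only) */
#define PSR_PAN_BIT	(1UL << 22)
#define PSR_TCO_BIT	(1UL << 25)
#define SCTLR_ELx_SPAN	(1UL << 23)

/*
 * Instead of passing has_mte/sctlr directly, the caller supplies
 * accessors that know how to source each input, so the same entry
 * logic can serve both host and guest.
 */
struct pstate_src {
	bool (*has_mte)(void *ctxt);
	u64  (*read_sctlr)(void *ctxt);
	void *ctxt;
};

static u64 get_except64_pstate(u64 old, const struct pstate_src *src,
			       u64 target_mode)
{
	u64 new = 0;

	/* PSTATE.TCO is set on exception entry when MTE is implemented */
	if (src->has_mte(src->ctxt))
		new |= PSR_TCO_BIT;

	/* PSTATE.PAN is inherited, and set unless SCTLR_ELx.SPAN is set */
	new |= old & PSR_PAN_BIT;
	if (!(src->read_sctlr(src->ctxt) & SCTLR_ELx_SPAN))
		new |= PSR_PAN_BIT;

	return new | target_mode;
}

/* Example accessors backed by a plain struct standing in for a context */
struct demo_ctxt { bool mte; u64 sctlr; };

static bool demo_has_mte(void *c)    { return ((struct demo_ctxt *)c)->mte; }
static u64  demo_read_sctlr(void *c) { return ((struct demo_ctxt *)c)->sctlr; }
```

A guest-side variant would supply accessors reading the vcpu context
instead, keeping the entry logic itself shared.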

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


Thread overview: 9+ messages
2025-12-11 11:38 [PATCH v2 0/5] KVM: arm64: Enforce MTE disablement at EL2 Fuad Tabba
2025-12-11 11:38 ` [PATCH v2 1/5] arm64: Remove dead code resetting HCR_EL2 for pKVM Fuad Tabba
2025-12-11 11:38 ` [PATCH v2 2/5] arm64: Clear HCR_EL2.ATA when MTE is not supported or disabled Fuad Tabba
2026-01-20 15:11   ` Marc Zyngier
2025-12-11 11:38 ` [PATCH v2 3/5] KVM: arm64: Refactor enter_exception64() Fuad Tabba
2026-01-20 14:57   ` Marc Zyngier [this message]
2025-12-11 11:38 ` [PATCH v2 4/5] arm64: Inject UNDEF when accessing MTE sysregs with MTE disabled Fuad Tabba
2025-12-11 11:38 ` [PATCH v2 5/5] KVM: arm64: Use kvm_has_mte() in pKVM trap initialization Fuad Tabba
2026-01-20  9:05 ` [PATCH v2 0/5] KVM: arm64: Enforce MTE disablement at EL2 Fuad Tabba
