From: Fuad Tabba <tabba@google.com>
To: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org
Cc: tabba@google.com, catalin.marinas@arm.com, will@kernel.org,
maz@kernel.org, oupton@kernel.org, qperret@google.com,
suzuki.poulose@arm.com, joey.gouly@arm.com,
yuzenghui@huawei.com
Subject: [PATCH 5/6] KVM: arm64: Fix pin leak and publication ordering in __pkvm_init_vcpu()
Date: Fri, 24 Apr 2026 09:49:07 +0100
Message-ID: <20260424084908.370776-6-tabba@google.com>
In-Reply-To: <20260424084908.370776-1-tabba@google.com>

Two bugs exist in the vCPU initialisation path:

1. If a check fails after hyp_pin_shared_mem() succeeds, the cleanup
   path jumps to 'unlock' without calling unpin_host_vcpu() or
   unpin_host_sve_state(), permanently leaking pin references on the
   host vCPU and SVE state pages.

   Extract a register_hyp_vcpu() helper that performs the checks and
   the store. When register_hyp_vcpu() returns an error, call
   unpin_host_vcpu() and unpin_host_sve_state() inline before falling
   through to the existing 'unlock' label.

2. register_hyp_vcpu() publishes the new vCPU pointer into
   'hyp_vm->vcpus[]' with a bare store, allowing a concurrent caller
   of pkvm_load_hyp_vcpu() to observe a partially initialised vCPU
   object.

   Make the store an smp_store_release() and the load in
   pkvm_load_hyp_vcpu() an smp_load_acquire(). While 'vm_table_lock'
   currently serialises the store and the load, these barriers ensure
   that a reader sees the fully initialised 'hyp_vcpu' object even if
   a lockless path were ever added, or if the lock's own ordering
   guarantees proved insufficient for nested object initialisation.
Fixes: 49af6ddb8e5c ("KVM: arm64: Add infrastructure to create and track pKVM instances at EL2")
Reported-by: Ben Simner <ben.simner@cl.cam.ac.uk>
Co-developed-by: Will Deacon <willdeacon@google.com>
Signed-off-by: Will Deacon <willdeacon@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/hyp/nvhe/pkvm.c | 38 ++++++++++++++++++++++------------
1 file changed, 25 insertions(+), 13 deletions(-)
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 7ed96d64d611..e7496eb85628 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -266,7 +266,8 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 	if (hyp_vm->kvm.created_vcpus <= vcpu_idx)
 		goto unlock;
 
-	hyp_vcpu = hyp_vm->vcpus[vcpu_idx];
+	/* Pairs with smp_store_release() in register_hyp_vcpu(). */
+	hyp_vcpu = smp_load_acquire(&hyp_vm->vcpus[vcpu_idx]);
 	if (!hyp_vcpu)
 		goto unlock;
 
@@ -860,12 +861,30 @@ int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva,
  * the page-aligned size of 'struct pkvm_hyp_vcpu'.
  * Return 0 on success, negative error code on failure.
  */
+static int register_hyp_vcpu(struct pkvm_hyp_vm *hyp_vm,
+			     struct pkvm_hyp_vcpu *hyp_vcpu)
+{
+	unsigned int idx = hyp_vcpu->vcpu.vcpu_idx;
+
+	if (idx >= hyp_vm->kvm.created_vcpus)
+		return -EINVAL;
+
+	if (hyp_vm->vcpus[idx])
+		return -EINVAL;
+
+	/*
+	 * Ensure the hyp_vcpu is initialised before publishing it to
+	 * the vCPU-load path via 'hyp_vm->vcpus[]'.
+	 */
+	smp_store_release(&hyp_vm->vcpus[idx], hyp_vcpu);
+	return 0;
+}
+
 int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
 		     unsigned long vcpu_hva)
 {
 	struct pkvm_hyp_vcpu *hyp_vcpu;
 	struct pkvm_hyp_vm *hyp_vm;
-	unsigned int idx;
 	int ret;
 
 	hyp_vcpu = map_donated_memory(vcpu_hva, sizeof(*hyp_vcpu));
@@ -884,18 +903,11 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
 	if (ret)
 		goto unlock;
 
-	idx = hyp_vcpu->vcpu.vcpu_idx;
-	if (idx >= hyp_vm->kvm.created_vcpus) {
-		ret = -EINVAL;
-		goto unlock;
+	ret = register_hyp_vcpu(hyp_vm, hyp_vcpu);
+	if (ret) {
+		unpin_host_vcpu(host_vcpu);
+		unpin_host_sve_state(hyp_vcpu);
 	}
-
-	if (hyp_vm->vcpus[idx]) {
-		ret = -EINVAL;
-		goto unlock;
-	}
-
-	hyp_vm->vcpus[idx] = hyp_vcpu;
 
 unlock:
 	hyp_spin_unlock(&vm_table_lock);
--
2.54.0.rc2.544.gc7ae2d5bb8-goog
Thread overview: 9+ messages
2026-04-24 8:49 [PATCH 0/6] KVM: arm64: pKVM init and feature detection fixes Fuad Tabba
2026-04-24 8:49 ` [PATCH 1/6] KVM: arm64: Fix FEAT_Debugv8p9 to check DebugVer, not PMUVer Fuad Tabba
2026-04-24 8:49 ` [PATCH 2/6] KVM: arm64: Fix typo in feature check comments Fuad Tabba
2026-04-24 8:49 ` [PATCH 3/6] KVM: arm64: Fix FEAT_SPE_FnE to use PMSIDR_EL1.FnE, not PMSVer Fuad Tabba
2026-04-24 8:49 ` [PATCH 4/6] KVM: arm64: Fix kvm_vcpu_initialized() macro parameter Fuad Tabba
2026-04-24 8:49 ` Fuad Tabba [this message]
2026-04-24 8:49 ` [PATCH 6/6] KVM: arm64: Fix initialisation order in __pkvm_init_finalise() Fuad Tabba
2026-04-24 11:02 ` [PATCH 0/6] KVM: arm64: pKVM init and feature detection fixes Marc Zyngier
2026-04-24 11:08 ` Marc Zyngier