* [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2
@ 2026-03-16 17:35 Fuad Tabba
2026-03-16 17:35 ` [PATCH 01/10] KVM: arm64: Add scoped resource management (guard) for hyp_spinlock Fuad Tabba
` (11 more replies)
0 siblings, 12 replies; 21+ messages in thread
From: Fuad Tabba @ 2026-03-16 17:35 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
Cc: Fuad Tabba
Hi everyone,
I stumbled upon a lock leak on an early error path while reviewing Sebastien's
recent GIC Hardening series [1]. It reminded me that we've run into this exact
kind of bug pretty often while working on pKVM. Since we're hopefully going to
upstream even more pKVM code soon, I'd like to try and fix this pattern once
and for all.
This series refactors a good chunk of the KVM/arm64 codebase -- covering both the
host at EL1 and the hypervisor at EL2 -- to use the guard() and scoped_guard()
macros from <linux/cleanup.h>.
By tying lock lifetimes to scope:
- We can safely use early returns (like return -EINVAL;) without accidentally
leaking locks.
- We can get rid of a lot of tangled goto exit labels.
- Reviewing patches gets a lot easier since we don't have to mentally track lock
states across giant functions.
A common concern with these RAII-style macros is that they might bloat the
generated assembly. Looking at the compiled code, the opposite is true here.
As an example, comparing the EL2 object file
arch/arm64/kvm/hyp/nvhe/pkvm.nvhe.o before and after this series:
- Instruction count: dropped by 39 instructions.
- .text segment size: shrank by 166 bytes.
This is because the compiler no longer has to generate dedicated branch
targets for the exit blocks, nor the unwinding jumps whose only purpose is to
call hyp_spin_unlock; it inlines the unlock right where the return happens.
The series is split into two parts:
- Patches 1-5 (EL2): Add the DEFINE_LOCK_GUARD_1 wrapper for hyp_spinlock and
clean up the nvhe/ directory.
- Patches 6-10 (EL1): Clean up manual mutex_lock and spin_lock calls across the
standard KVM host code (mmu.c, arm.c, psci.c, reset.c, pkvm.c).
There are definitely other parts of the KVM codebase that could benefit from
this (especially the vGIC), but I'm stopping here for now to see what everyone
thinks of the approach before touching anything else.
Based on Linux 7.0-rc3.
Cheers,
/fuad
[1] https://lore.kernel.org/all/CA+EHjTy051kjdReuESmk3HpMsBEW=s3DxhJXa8vQJFHz--wG2Q@mail.gmail.com/
---
Fuad Tabba (10):
KVM: arm64: Add scoped resource management (guard) for hyp_spinlock
KVM: arm64: Use guard(hyp_spinlock) in page_alloc.c
KVM: arm64: Use guard(hyp_spinlock) in ffa.c
KVM: arm64: Use guard(hyp_spinlock) in mm.c
KVM: arm64: Use guard(hyp_spinlock) in pkvm.c
KVM: arm64: Use guard(mutex) in mmu.c
KVM: arm64: Use scoped resource management in arm.c
KVM: arm64: Use guard(spinlock) in psci.c
KVM: arm64: Use guard(spinlock) in reset.c
KVM: arm64: Use guard(mutex) in pkvm.c
arch/arm64/kvm/arm.c | 53 +++++--------
arch/arm64/kvm/hyp/include/nvhe/spinlock.h | 5 ++
arch/arm64/kvm/hyp/nvhe/ffa.c | 86 +++++++++-----------
arch/arm64/kvm/hyp/nvhe/mm.c | 37 +++------
arch/arm64/kvm/hyp/nvhe/page_alloc.c | 13 +--
arch/arm64/kvm/hyp/nvhe/pkvm.c | 122 ++++++++++++-----------------
arch/arm64/kvm/mmu.c | 95 +++++++++-------------
arch/arm64/kvm/pkvm.c | 19 ++---
arch/arm64/kvm/psci.c | 19 ++---
arch/arm64/kvm/reset.c | 8 +-
10 files changed, 182 insertions(+), 275 deletions(-)
---
base-commit: 1f318b96cc84d7c2ab792fcc0bfd42a7ca890681
change-id: 20260316-tabba-el2_guard-2f7e3fa06f69
Best regards,
--
Fuad Tabba <tabba@google.com>
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH 01/10] KVM: arm64: Add scoped resource management (guard) for hyp_spinlock
2026-03-16 17:35 [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2 Fuad Tabba
@ 2026-03-16 17:35 ` Fuad Tabba
2026-03-16 17:35 ` [PATCH 02/10] KVM: arm64: Use guard(hyp_spinlock) in page_alloc.c Fuad Tabba
` (10 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Fuad Tabba @ 2026-03-16 17:35 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
Cc: Fuad Tabba
The Linux kernel recently introduced scoped resource management macros
in <linux/cleanup.h>, enabling RAII-like patterns such as guard() and
scoped_guard(). These macros significantly reduce the risk of resource
leaks and deadlocks, and remove the need for messy unwinding paths.
The arm64 KVM EL2 hypervisor relies heavily on its own locking
primitive, hyp_spinlock_t. Managing these locks carefully across
complex failure paths is critical, as a missed unlock at EL2
typically results in a system-wide panic.
Add support for the guard(hyp_spinlock) and scoped_guard(hyp_spinlock)
macros by including <linux/cleanup.h> and using the DEFINE_LOCK_GUARD_1
infrastructure in the spinlock header. This paves the way for converting
error-prone manual locking into automatic scoped management.
Change-Id: Iba6d43c081b5fdf2496dc599fd6a781294493cb9
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/hyp/include/nvhe/spinlock.h | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/arm64/kvm/hyp/include/nvhe/spinlock.h b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
index 7c7ea8c55405..2681f8d2fde5 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
@@ -16,6 +16,7 @@
#include <asm/alternative.h>
#include <asm/lse.h>
#include <asm/rwonce.h>
+#include <linux/cleanup.h>
typedef union hyp_spinlock {
u32 __val;
@@ -98,6 +99,10 @@ static inline void hyp_spin_unlock(hyp_spinlock_t *lock)
: "memory");
}
+DEFINE_LOCK_GUARD_1(hyp_spinlock, hyp_spinlock_t,
+ hyp_spin_lock(_T->lock),
+ hyp_spin_unlock(_T->lock))
+
static inline bool hyp_spin_is_locked(hyp_spinlock_t *lock)
{
hyp_spinlock_t lockval = READ_ONCE(*lock);
--
2.53.0.851.ga537e3e6e9-goog
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH 02/10] KVM: arm64: Use guard(hyp_spinlock) in page_alloc.c
2026-03-16 17:35 [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2 Fuad Tabba
2026-03-16 17:35 ` [PATCH 01/10] KVM: arm64: Add scoped resource management (guard) for hyp_spinlock Fuad Tabba
@ 2026-03-16 17:35 ` Fuad Tabba
2026-03-16 17:35 ` [PATCH 03/10] KVM: arm64: Use guard(hyp_spinlock) in ffa.c Fuad Tabba
` (9 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Fuad Tabba @ 2026-03-16 17:35 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
Cc: Fuad Tabba
Migrate the manual hyp_spin_lock() and hyp_spin_unlock() calls in
hyp_put_page(), hyp_get_page(), and hyp_alloc_pages() to the new
guard(hyp_spinlock) macro.
This eliminates the need for manual unlock calls on return paths.
In hyp_alloc_pages() specifically, it simplifies the early-return
error path by removing the manual unlock (and the now-unneeded braces
around the single-statement branch), reducing the potential for future
lock-leak regressions.
Change-Id: I37bb8236dbfff9b58bda0937a78d2057036599b4
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/hyp/nvhe/page_alloc.c | 13 ++++---------
1 file changed, 4 insertions(+), 9 deletions(-)
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index a1eb27a1a747..f43d8ad507e9 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -167,18 +167,16 @@ void hyp_put_page(struct hyp_pool *pool, void *addr)
{
struct hyp_page *p = hyp_virt_to_page(addr);
- hyp_spin_lock(&pool->lock);
+ guard(hyp_spinlock)(&pool->lock);
__hyp_put_page(pool, p);
- hyp_spin_unlock(&pool->lock);
}
void hyp_get_page(struct hyp_pool *pool, void *addr)
{
struct hyp_page *p = hyp_virt_to_page(addr);
- hyp_spin_lock(&pool->lock);
+ guard(hyp_spinlock)(&pool->lock);
hyp_page_ref_inc(p);
- hyp_spin_unlock(&pool->lock);
}
void hyp_split_page(struct hyp_page *p)
@@ -200,22 +198,19 @@ void *hyp_alloc_pages(struct hyp_pool *pool, u8 order)
struct hyp_page *p;
u8 i = order;
- hyp_spin_lock(&pool->lock);
+ guard(hyp_spinlock)(&pool->lock);
/* Look for a high-enough-order page */
while (i <= pool->max_order && list_empty(&pool->free_area[i]))
i++;
- if (i > pool->max_order) {
- hyp_spin_unlock(&pool->lock);
+ if (i > pool->max_order)
return NULL;
- }
/* Extract it from the tree at the right order */
p = node_to_page(pool->free_area[i].next);
p = __hyp_extract_page(pool, p, order);
hyp_set_page_refcounted(p);
- hyp_spin_unlock(&pool->lock);
return hyp_page_to_virt(p);
}
--
2.53.0.851.ga537e3e6e9-goog
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH 03/10] KVM: arm64: Use guard(hyp_spinlock) in ffa.c
2026-03-16 17:35 [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2 Fuad Tabba
2026-03-16 17:35 ` [PATCH 01/10] KVM: arm64: Add scoped resource management (guard) for hyp_spinlock Fuad Tabba
2026-03-16 17:35 ` [PATCH 02/10] KVM: arm64: Use guard(hyp_spinlock) in page_alloc.c Fuad Tabba
@ 2026-03-16 17:35 ` Fuad Tabba
2026-03-17 17:40 ` Jonathan Cameron
2026-03-16 17:35 ` [PATCH 04/10] KVM: arm64: Use guard(hyp_spinlock) in mm.c Fuad Tabba
` (8 subsequent siblings)
11 siblings, 1 reply; 21+ messages in thread
From: Fuad Tabba @ 2026-03-16 17:35 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
Cc: Fuad Tabba
Migrate manual hyp_spin_lock() and hyp_spin_unlock() calls managing
host_buffers.lock and version_lock to use the guard(hyp_spinlock)
macro.
This eliminates manual unlock calls on return paths and simplifies
error handling by replacing goto labels with direct returns.
Change-Id: I52e31c0bed3d2772c800a535af8abdabd81a178b
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/hyp/nvhe/ffa.c | 86 +++++++++++++++++++------------------------
1 file changed, 38 insertions(+), 48 deletions(-)
diff --git a/arch/arm64/kvm/hyp/nvhe/ffa.c b/arch/arm64/kvm/hyp/nvhe/ffa.c
index 94161ea1cd60..0c772501c3ba 100644
--- a/arch/arm64/kvm/hyp/nvhe/ffa.c
+++ b/arch/arm64/kvm/hyp/nvhe/ffa.c
@@ -239,6 +239,8 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
int ret = 0;
void *rx_virt, *tx_virt;
+ guard(hyp_spinlock)(&host_buffers.lock);
+
if (npages != (KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) / FFA_PAGE_SIZE) {
ret = FFA_RET_INVALID_PARAMETERS;
goto out;
@@ -249,10 +251,9 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
goto out;
}
- hyp_spin_lock(&host_buffers.lock);
if (host_buffers.tx) {
ret = FFA_RET_DENIED;
- goto out_unlock;
+ goto out;
}
/*
@@ -261,7 +262,7 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
*/
ret = ffa_map_hyp_buffers(npages);
if (ret)
- goto out_unlock;
+ goto out;
ret = __pkvm_host_share_hyp(hyp_phys_to_pfn(tx));
if (ret) {
@@ -292,8 +293,6 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
host_buffers.tx = tx_virt;
host_buffers.rx = rx_virt;
-out_unlock:
- hyp_spin_unlock(&host_buffers.lock);
out:
ffa_to_smccc_res(res, ret);
return;
@@ -306,7 +305,7 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
__pkvm_host_unshare_hyp(hyp_phys_to_pfn(tx));
err_unmap:
ffa_unmap_hyp_buffers();
- goto out_unlock;
+ goto out;
}
static void do_ffa_rxtx_unmap(struct arm_smccc_1_2_regs *res,
@@ -315,15 +314,16 @@ static void do_ffa_rxtx_unmap(struct arm_smccc_1_2_regs *res,
DECLARE_REG(u32, id, ctxt, 1);
int ret = 0;
+ guard(hyp_spinlock)(&host_buffers.lock);
+
if (id != HOST_FFA_ID) {
ret = FFA_RET_INVALID_PARAMETERS;
goto out;
}
- hyp_spin_lock(&host_buffers.lock);
if (!host_buffers.tx) {
ret = FFA_RET_INVALID_PARAMETERS;
- goto out_unlock;
+ goto out;
}
hyp_unpin_shared_mem(host_buffers.tx, host_buffers.tx + 1);
@@ -336,8 +336,6 @@ static void do_ffa_rxtx_unmap(struct arm_smccc_1_2_regs *res,
ffa_unmap_hyp_buffers();
-out_unlock:
- hyp_spin_unlock(&host_buffers.lock);
out:
ffa_to_smccc_res(res, ret);
}
@@ -421,15 +419,16 @@ static void do_ffa_mem_frag_tx(struct arm_smccc_1_2_regs *res,
int ret = FFA_RET_INVALID_PARAMETERS;
u32 nr_ranges;
+ guard(hyp_spinlock)(&host_buffers.lock);
+
if (fraglen > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE)
goto out;
if (fraglen % sizeof(*buf))
goto out;
- hyp_spin_lock(&host_buffers.lock);
if (!host_buffers.tx)
- goto out_unlock;
+ goto out;
buf = hyp_buffers.tx;
memcpy(buf, host_buffers.tx, fraglen);
@@ -444,15 +443,13 @@ static void do_ffa_mem_frag_tx(struct arm_smccc_1_2_regs *res,
*/
ffa_mem_reclaim(res, handle_lo, handle_hi, 0);
WARN_ON(res->a0 != FFA_SUCCESS);
- goto out_unlock;
+ goto out;
}
ffa_mem_frag_tx(res, handle_lo, handle_hi, fraglen, endpoint_id);
if (res->a0 != FFA_SUCCESS && res->a0 != FFA_MEM_FRAG_RX)
WARN_ON(ffa_host_unshare_ranges(buf, nr_ranges));
-out_unlock:
- hyp_spin_unlock(&host_buffers.lock);
out:
if (ret)
ffa_to_smccc_res(res, ret);
@@ -482,6 +479,8 @@ static void __do_ffa_mem_xfer(const u64 func_id,
u32 offset, nr_ranges, checked_offset;
int ret = 0;
+ guard(hyp_spinlock)(&host_buffers.lock);
+
if (addr_mbz || npages_mbz || fraglen > len ||
fraglen > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) {
ret = FFA_RET_INVALID_PARAMETERS;
@@ -494,15 +493,14 @@ static void __do_ffa_mem_xfer(const u64 func_id,
goto out;
}
- hyp_spin_lock(&host_buffers.lock);
if (!host_buffers.tx) {
ret = FFA_RET_INVALID_PARAMETERS;
- goto out_unlock;
+ goto out;
}
if (len > ffa_desc_buf.len) {
ret = FFA_RET_NO_MEMORY;
- goto out_unlock;
+ goto out;
}
buf = hyp_buffers.tx;
@@ -513,30 +511,30 @@ static void __do_ffa_mem_xfer(const u64 func_id,
offset = ep_mem_access->composite_off;
if (!offset || buf->ep_count != 1 || buf->sender_id != HOST_FFA_ID) {
ret = FFA_RET_INVALID_PARAMETERS;
- goto out_unlock;
+ goto out;
}
if (check_add_overflow(offset, sizeof(struct ffa_composite_mem_region), &checked_offset)) {
ret = FFA_RET_INVALID_PARAMETERS;
- goto out_unlock;
+ goto out;
}
if (fraglen < checked_offset) {
ret = FFA_RET_INVALID_PARAMETERS;
- goto out_unlock;
+ goto out;
}
reg = (void *)buf + offset;
nr_ranges = ((void *)buf + fraglen) - (void *)reg->constituents;
if (nr_ranges % sizeof(reg->constituents[0])) {
ret = FFA_RET_INVALID_PARAMETERS;
- goto out_unlock;
+ goto out;
}
nr_ranges /= sizeof(reg->constituents[0]);
ret = ffa_host_share_ranges(reg->constituents, nr_ranges);
if (ret)
- goto out_unlock;
+ goto out;
ffa_mem_xfer(res, func_id, len, fraglen);
if (fraglen != len) {
@@ -549,8 +547,6 @@ static void __do_ffa_mem_xfer(const u64 func_id,
goto err_unshare;
}
-out_unlock:
- hyp_spin_unlock(&host_buffers.lock);
out:
if (ret)
ffa_to_smccc_res(res, ret);
@@ -558,7 +554,7 @@ static void __do_ffa_mem_xfer(const u64 func_id,
err_unshare:
WARN_ON(ffa_host_unshare_ranges(reg->constituents, nr_ranges));
- goto out_unlock;
+ goto out;
}
#define do_ffa_mem_xfer(fid, res, ctxt) \
@@ -583,7 +579,7 @@ static void do_ffa_mem_reclaim(struct arm_smccc_1_2_regs *res,
handle = PACK_HANDLE(handle_lo, handle_hi);
- hyp_spin_lock(&host_buffers.lock);
+ guard(hyp_spinlock)(&host_buffers.lock);
buf = hyp_buffers.tx;
*buf = (struct ffa_mem_region) {
@@ -594,7 +590,7 @@ static void do_ffa_mem_reclaim(struct arm_smccc_1_2_regs *res,
ffa_retrieve_req(res, sizeof(*buf));
buf = hyp_buffers.rx;
if (res->a0 != FFA_MEM_RETRIEVE_RESP)
- goto out_unlock;
+ goto out;
len = res->a1;
fraglen = res->a2;
@@ -611,13 +607,13 @@ static void do_ffa_mem_reclaim(struct arm_smccc_1_2_regs *res,
fraglen > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE)) {
ret = FFA_RET_ABORTED;
ffa_rx_release(res);
- goto out_unlock;
+ goto out;
}
if (len > ffa_desc_buf.len) {
ret = FFA_RET_NO_MEMORY;
ffa_rx_release(res);
- goto out_unlock;
+ goto out;
}
buf = ffa_desc_buf.buf;
@@ -628,7 +624,7 @@ static void do_ffa_mem_reclaim(struct arm_smccc_1_2_regs *res,
ffa_mem_frag_rx(res, handle_lo, handle_hi, fragoff);
if (res->a0 != FFA_MEM_FRAG_TX) {
ret = FFA_RET_INVALID_PARAMETERS;
- goto out_unlock;
+ goto out;
}
fraglen = res->a3;
@@ -638,15 +634,13 @@ static void do_ffa_mem_reclaim(struct arm_smccc_1_2_regs *res,
ffa_mem_reclaim(res, handle_lo, handle_hi, flags);
if (res->a0 != FFA_SUCCESS)
- goto out_unlock;
+ goto out;
reg = (void *)buf + offset;
/* If the SPMD was happy, then we should be too. */
WARN_ON(ffa_host_unshare_ranges(reg->constituents,
reg->addr_range_cnt));
-out_unlock:
- hyp_spin_unlock(&host_buffers.lock);
-
+out:
if (ret)
ffa_to_smccc_res(res, ret);
}
@@ -774,13 +768,13 @@ static void do_ffa_version(struct arm_smccc_1_2_regs *res,
return;
}
- hyp_spin_lock(&version_lock);
+ guard(hyp_spinlock)(&version_lock);
if (has_version_negotiated) {
if (FFA_MINOR_VERSION(ffa_req_version) < FFA_MINOR_VERSION(hyp_ffa_version))
res->a0 = FFA_RET_NOT_SUPPORTED;
else
res->a0 = hyp_ffa_version;
- goto unlock;
+ return;
}
/*
@@ -793,7 +787,7 @@ static void do_ffa_version(struct arm_smccc_1_2_regs *res,
.a1 = ffa_req_version,
}, res);
if ((s32)res->a0 == FFA_RET_NOT_SUPPORTED)
- goto unlock;
+ return;
hyp_ffa_version = ffa_req_version;
}
@@ -804,8 +798,6 @@ static void do_ffa_version(struct arm_smccc_1_2_regs *res,
smp_store_release(&has_version_negotiated, true);
res->a0 = hyp_ffa_version;
}
-unlock:
- hyp_spin_unlock(&version_lock);
}
static void do_ffa_part_get(struct arm_smccc_1_2_regs *res,
@@ -818,10 +810,10 @@ static void do_ffa_part_get(struct arm_smccc_1_2_regs *res,
DECLARE_REG(u32, flags, ctxt, 5);
u32 count, partition_sz, copy_sz;
- hyp_spin_lock(&host_buffers.lock);
+ guard(hyp_spinlock)(&host_buffers.lock);
if (!host_buffers.rx) {
ffa_to_smccc_res(res, FFA_RET_BUSY);
- goto out_unlock;
+ return;
}
arm_smccc_1_2_smc(&(struct arm_smccc_1_2_regs) {
@@ -834,16 +826,16 @@ static void do_ffa_part_get(struct arm_smccc_1_2_regs *res,
}, res);
if (res->a0 != FFA_SUCCESS)
- goto out_unlock;
+ return;
count = res->a2;
if (!count)
- goto out_unlock;
+ return;
if (hyp_ffa_version > FFA_VERSION_1_0) {
/* Get the number of partitions deployed in the system */
if (flags & 0x1)
- goto out_unlock;
+ return;
partition_sz = res->a3;
} else {
@@ -854,12 +846,10 @@ static void do_ffa_part_get(struct arm_smccc_1_2_regs *res,
copy_sz = partition_sz * count;
if (copy_sz > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) {
ffa_to_smccc_res(res, FFA_RET_ABORTED);
- goto out_unlock;
+ return;
}
memcpy(host_buffers.rx, hyp_buffers.rx, copy_sz);
-out_unlock:
- hyp_spin_unlock(&host_buffers.lock);
}
bool kvm_host_ffa_handler(struct kvm_cpu_context *host_ctxt, u32 func_id)
--
2.53.0.851.ga537e3e6e9-goog
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH 04/10] KVM: arm64: Use guard(hyp_spinlock) in mm.c
2026-03-16 17:35 [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2 Fuad Tabba
` (2 preceding siblings ...)
2026-03-16 17:35 ` [PATCH 03/10] KVM: arm64: Use guard(hyp_spinlock) in ffa.c Fuad Tabba
@ 2026-03-16 17:35 ` Fuad Tabba
2026-03-17 17:44 ` Jonathan Cameron
2026-03-16 17:35 ` [PATCH 05/10] KVM: arm64: Use guard(hyp_spinlock) in pkvm.c Fuad Tabba
` (7 subsequent siblings)
11 siblings, 1 reply; 21+ messages in thread
From: Fuad Tabba @ 2026-03-16 17:35 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
Cc: Fuad Tabba
Migrate manual hyp_spin_lock() and hyp_spin_unlock() calls managing
pkvm_pgd_lock to use the guard(hyp_spinlock) macro.
This eliminates manual unlock calls on return paths and simplifies
error handling by replacing goto labels with direct returns.
Note: hyp_fixblock_lock is taken in hyp_fixblock_map() and released in
hyp_fixblock_unmap(), so its critical section is not confined to a single
scope; it retains explicit lock/unlock calls rather than a guard.
Change-Id: I6bb3f4105e95480269e5bf8289d084c8f9981730
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/hyp/nvhe/mm.c | 37 ++++++++++---------------------------
1 file changed, 10 insertions(+), 27 deletions(-)
diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index 218976287d3f..7a15c9fc15e5 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -35,13 +35,8 @@ static DEFINE_PER_CPU(struct hyp_fixmap_slot, fixmap_slots);
static int __pkvm_create_mappings(unsigned long start, unsigned long size,
unsigned long phys, enum kvm_pgtable_prot prot)
{
- int err;
-
- hyp_spin_lock(&pkvm_pgd_lock);
- err = kvm_pgtable_hyp_map(&pkvm_pgtable, start, size, phys, prot);
- hyp_spin_unlock(&pkvm_pgd_lock);
-
- return err;
+ guard(hyp_spinlock)(&pkvm_pgd_lock);
+ return kvm_pgtable_hyp_map(&pkvm_pgtable, start, size, phys, prot);
}
static int __pkvm_alloc_private_va_range(unsigned long start, size_t size)
@@ -80,10 +75,9 @@ int pkvm_alloc_private_va_range(size_t size, unsigned long *haddr)
unsigned long addr;
int ret;
- hyp_spin_lock(&pkvm_pgd_lock);
+ guard(hyp_spinlock)(&pkvm_pgd_lock);
addr = __io_map_base;
ret = __pkvm_alloc_private_va_range(addr, size);
- hyp_spin_unlock(&pkvm_pgd_lock);
*haddr = addr;
@@ -137,13 +131,8 @@ int pkvm_create_mappings_locked(void *from, void *to, enum kvm_pgtable_prot prot
int pkvm_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot)
{
- int ret;
-
- hyp_spin_lock(&pkvm_pgd_lock);
- ret = pkvm_create_mappings_locked(from, to, prot);
- hyp_spin_unlock(&pkvm_pgd_lock);
-
- return ret;
+ guard(hyp_spinlock)(&pkvm_pgd_lock);
+ return pkvm_create_mappings_locked(from, to, prot);
}
int hyp_back_vmemmap(phys_addr_t back)
@@ -340,22 +329,17 @@ static int create_fixblock(void)
if (i >= hyp_memblock_nr)
return -EINVAL;
- hyp_spin_lock(&pkvm_pgd_lock);
+ guard(hyp_spinlock)(&pkvm_pgd_lock);
addr = ALIGN(__io_map_base, PMD_SIZE);
ret = __pkvm_alloc_private_va_range(addr, PMD_SIZE);
if (ret)
- goto unlock;
+ return ret;
ret = kvm_pgtable_hyp_map(&pkvm_pgtable, addr, PMD_SIZE, phys, PAGE_HYP);
if (ret)
- goto unlock;
+ return ret;
- ret = kvm_pgtable_walk(&pkvm_pgtable, addr, PMD_SIZE, &walker);
-
-unlock:
- hyp_spin_unlock(&pkvm_pgd_lock);
-
- return ret;
+ return kvm_pgtable_walk(&pkvm_pgtable, addr, PMD_SIZE, &walker);
#else
return 0;
#endif
@@ -437,7 +421,7 @@ int pkvm_create_stack(phys_addr_t phys, unsigned long *haddr)
size_t size;
int ret;
- hyp_spin_lock(&pkvm_pgd_lock);
+ guard(hyp_spinlock)(&pkvm_pgd_lock);
prev_base = __io_map_base;
/*
@@ -463,7 +447,6 @@ int pkvm_create_stack(phys_addr_t phys, unsigned long *haddr)
if (ret)
__io_map_base = prev_base;
}
- hyp_spin_unlock(&pkvm_pgd_lock);
*haddr = addr + size;
--
2.53.0.851.ga537e3e6e9-goog
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH 05/10] KVM: arm64: Use guard(hyp_spinlock) in pkvm.c
2026-03-16 17:35 [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2 Fuad Tabba
` (3 preceding siblings ...)
2026-03-16 17:35 ` [PATCH 04/10] KVM: arm64: Use guard(hyp_spinlock) in mm.c Fuad Tabba
@ 2026-03-16 17:35 ` Fuad Tabba
2026-03-17 17:47 ` Jonathan Cameron
2026-03-16 17:35 ` [PATCH 06/10] KVM: arm64: Use guard(mutex) in mmu.c Fuad Tabba
` (6 subsequent siblings)
11 siblings, 1 reply; 21+ messages in thread
From: Fuad Tabba @ 2026-03-16 17:35 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
Cc: Fuad Tabba
Migrate manual hyp_spin_lock() and hyp_spin_unlock() calls managing
the global vm_table_lock to use the guard(hyp_spinlock) macro.
This significantly cleans up validation and error paths during VM
creation and manipulation by eliminating the need for goto labels
and manual unlock procedures on early returns.
Change-Id: I894df69b3cfe053a77dd660dfb70c95640c6d70c
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/hyp/nvhe/pkvm.c | 122 +++++++++++++++++------------------------
1 file changed, 51 insertions(+), 71 deletions(-)
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 2f029bfe4755..8f901fdead89 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -253,28 +253,23 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
if (__this_cpu_read(loaded_hyp_vcpu))
return NULL;
- hyp_spin_lock(&vm_table_lock);
+ guard(hyp_spinlock)(&vm_table_lock);
hyp_vm = get_vm_by_handle(handle);
if (!hyp_vm || hyp_vm->kvm.created_vcpus <= vcpu_idx)
- goto unlock;
+ return NULL;
hyp_vcpu = hyp_vm->vcpus[vcpu_idx];
if (!hyp_vcpu)
- goto unlock;
+ return NULL;
/* Ensure vcpu isn't loaded on more than one cpu simultaneously. */
- if (unlikely(hyp_vcpu->loaded_hyp_vcpu)) {
- hyp_vcpu = NULL;
- goto unlock;
- }
+ if (unlikely(hyp_vcpu->loaded_hyp_vcpu))
+ return NULL;
hyp_vcpu->loaded_hyp_vcpu = this_cpu_ptr(&loaded_hyp_vcpu);
hyp_page_ref_inc(hyp_virt_to_page(hyp_vm));
-unlock:
- hyp_spin_unlock(&vm_table_lock);
- if (hyp_vcpu)
- __this_cpu_write(loaded_hyp_vcpu, hyp_vcpu);
+ __this_cpu_write(loaded_hyp_vcpu, hyp_vcpu);
return hyp_vcpu;
}
@@ -282,11 +277,10 @@ void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
{
struct pkvm_hyp_vm *hyp_vm = pkvm_hyp_vcpu_to_hyp_vm(hyp_vcpu);
- hyp_spin_lock(&vm_table_lock);
+ guard(hyp_spinlock)(&vm_table_lock);
hyp_vcpu->loaded_hyp_vcpu = NULL;
__this_cpu_write(loaded_hyp_vcpu, NULL);
hyp_page_ref_dec(hyp_virt_to_page(hyp_vm));
- hyp_spin_unlock(&vm_table_lock);
}
struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void)
@@ -299,20 +293,18 @@ struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle)
{
struct pkvm_hyp_vm *hyp_vm;
- hyp_spin_lock(&vm_table_lock);
+ guard(hyp_spinlock)(&vm_table_lock);
hyp_vm = get_vm_by_handle(handle);
if (hyp_vm)
hyp_page_ref_inc(hyp_virt_to_page(hyp_vm));
- hyp_spin_unlock(&vm_table_lock);
return hyp_vm;
}
void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm)
{
- hyp_spin_lock(&vm_table_lock);
+ guard(hyp_spinlock)(&vm_table_lock);
hyp_page_ref_dec(hyp_virt_to_page(hyp_vm));
- hyp_spin_unlock(&vm_table_lock);
}
struct pkvm_hyp_vm *get_np_pkvm_hyp_vm(pkvm_handle_t handle)
@@ -613,9 +605,8 @@ static int insert_vm_table_entry(pkvm_handle_t handle,
{
int ret;
- hyp_spin_lock(&vm_table_lock);
+ guard(hyp_spinlock)(&vm_table_lock);
ret = __insert_vm_table_entry(handle, hyp_vm);
- hyp_spin_unlock(&vm_table_lock);
return ret;
}
@@ -692,9 +683,8 @@ int __pkvm_reserve_vm(void)
{
int ret;
- hyp_spin_lock(&vm_table_lock);
+ guard(hyp_spinlock)(&vm_table_lock);
ret = allocate_vm_table_entry();
- hyp_spin_unlock(&vm_table_lock);
if (ret < 0)
return ret;
@@ -713,10 +703,9 @@ void __pkvm_unreserve_vm(pkvm_handle_t handle)
if (unlikely(!vm_table))
return;
- hyp_spin_lock(&vm_table_lock);
+ guard(hyp_spinlock)(&vm_table_lock);
if (likely(idx < KVM_MAX_PVMS && vm_table[idx] == RESERVED_ENTRY))
remove_vm_table_entry(handle);
- hyp_spin_unlock(&vm_table_lock);
}
/*
@@ -815,35 +804,35 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
if (!hyp_vcpu)
return -ENOMEM;
- hyp_spin_lock(&vm_table_lock);
+ scoped_guard(hyp_spinlock, &vm_table_lock) {
+ hyp_vm = get_vm_by_handle(handle);
+ if (!hyp_vm) {
+ ret = -ENOENT;
+ goto err_unmap;
+ }
- hyp_vm = get_vm_by_handle(handle);
- if (!hyp_vm) {
- ret = -ENOENT;
- goto unlock;
+ ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu);
+ if (ret)
+ goto err_unmap;
+
+ idx = hyp_vcpu->vcpu.vcpu_idx;
+ if (idx >= hyp_vm->kvm.created_vcpus) {
+ ret = -EINVAL;
+ goto err_unmap;
+ }
+
+ if (hyp_vm->vcpus[idx]) {
+ ret = -EINVAL;
+ goto err_unmap;
+ }
+
+ hyp_vm->vcpus[idx] = hyp_vcpu;
+
+ return 0;
}
- ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu);
- if (ret)
- goto unlock;
-
- idx = hyp_vcpu->vcpu.vcpu_idx;
- if (idx >= hyp_vm->kvm.created_vcpus) {
- ret = -EINVAL;
- goto unlock;
- }
-
- if (hyp_vm->vcpus[idx]) {
- ret = -EINVAL;
- goto unlock;
- }
-
- hyp_vm->vcpus[idx] = hyp_vcpu;
-unlock:
- hyp_spin_unlock(&vm_table_lock);
-
- if (ret)
- unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu));
+err_unmap:
+ unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu));
return ret;
}
@@ -866,27 +855,22 @@ int __pkvm_teardown_vm(pkvm_handle_t handle)
struct kvm *host_kvm;
unsigned int idx;
size_t vm_size;
- int err;
- hyp_spin_lock(&vm_table_lock);
- hyp_vm = get_vm_by_handle(handle);
- if (!hyp_vm) {
- err = -ENOENT;
- goto err_unlock;
+ scoped_guard(hyp_spinlock, &vm_table_lock) {
+ hyp_vm = get_vm_by_handle(handle);
+ if (!hyp_vm)
+ return -ENOENT;
+
+ if (WARN_ON(hyp_page_count(hyp_vm)))
+ return -EBUSY;
+
+ host_kvm = hyp_vm->host_kvm;
+
+ /* Ensure the VMID is clean before it can be reallocated */
+ __kvm_tlb_flush_vmid(&hyp_vm->kvm.arch.mmu);
+ remove_vm_table_entry(handle);
}
- if (WARN_ON(hyp_page_count(hyp_vm))) {
- err = -EBUSY;
- goto err_unlock;
- }
-
- host_kvm = hyp_vm->host_kvm;
-
- /* Ensure the VMID is clean before it can be reallocated */
- __kvm_tlb_flush_vmid(&hyp_vm->kvm.arch.mmu);
- remove_vm_table_entry(handle);
- hyp_spin_unlock(&vm_table_lock);
-
/* Reclaim guest pages (including page-table pages) */
mc = &host_kvm->arch.pkvm.teardown_mc;
stage2_mc = &host_kvm->arch.pkvm.stage2_teardown_mc;
@@ -917,8 +901,4 @@ int __pkvm_teardown_vm(pkvm_handle_t handle)
teardown_donated_memory(mc, hyp_vm, vm_size);
hyp_unpin_shared_mem(host_kvm, host_kvm + 1);
return 0;
-
-err_unlock:
- hyp_spin_unlock(&vm_table_lock);
- return err;
}
--
2.53.0.851.ga537e3e6e9-goog
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH 06/10] KVM: arm64: Use guard(mutex) in mmu.c
2026-03-16 17:35 [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2 Fuad Tabba
` (4 preceding siblings ...)
2026-03-16 17:35 ` [PATCH 05/10] KVM: arm64: Use guard(hyp_spinlock) in pkvm.c Fuad Tabba
@ 2026-03-16 17:35 ` Fuad Tabba
2026-03-17 17:50 ` Jonathan Cameron
2026-03-16 17:35 ` [PATCH 07/10] KVM: arm64: Use scoped resource management in arm.c Fuad Tabba
` (5 subsequent siblings)
11 siblings, 1 reply; 21+ messages in thread
From: Fuad Tabba @ 2026-03-16 17:35 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
Cc: Fuad Tabba
Migrate manual mutex_lock() and mutex_unlock() calls managing
kvm_hyp_pgd_mutex and hyp_shared_pfns_lock to use the
guard(mutex) macro.
This eliminates manual unlock calls on return paths and simplifies
error handling by replacing unlock-only goto labels with direct
returns. Centralized cleanup paths that still do other work keep
their goto labels, with only the manual unlocks removed.
Change-Id: Ib0f33a474eb84f19da4de0858c77751bbe55dfbb
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/mmu.c | 95 ++++++++++++++++++++--------------------------------
1 file changed, 36 insertions(+), 59 deletions(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index ec2eee857208..05f1cf839c9e 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -388,13 +388,12 @@ static void stage2_flush_vm(struct kvm *kvm)
*/
void __init free_hyp_pgds(void)
{
- mutex_lock(&kvm_hyp_pgd_mutex);
+ guard(mutex)(&kvm_hyp_pgd_mutex);
if (hyp_pgtable) {
kvm_pgtable_hyp_destroy(hyp_pgtable);
kfree(hyp_pgtable);
hyp_pgtable = NULL;
}
- mutex_unlock(&kvm_hyp_pgd_mutex);
}
static bool kvm_host_owns_hyp_mappings(void)
@@ -421,16 +420,11 @@ static bool kvm_host_owns_hyp_mappings(void)
int __create_hyp_mappings(unsigned long start, unsigned long size,
unsigned long phys, enum kvm_pgtable_prot prot)
{
- int err;
-
if (WARN_ON(!kvm_host_owns_hyp_mappings()))
return -EINVAL;
- mutex_lock(&kvm_hyp_pgd_mutex);
- err = kvm_pgtable_hyp_map(hyp_pgtable, start, size, phys, prot);
- mutex_unlock(&kvm_hyp_pgd_mutex);
-
- return err;
+ guard(mutex)(&kvm_hyp_pgd_mutex);
+ return kvm_pgtable_hyp_map(hyp_pgtable, start, size, phys, prot);
}
static phys_addr_t kvm_kaddr_to_phys(void *kaddr)
@@ -478,56 +472,42 @@ static int share_pfn_hyp(u64 pfn)
{
struct rb_node **node, *parent;
struct hyp_shared_pfn *this;
- int ret = 0;
- mutex_lock(&hyp_shared_pfns_lock);
+ guard(mutex)(&hyp_shared_pfns_lock);
this = find_shared_pfn(pfn, &node, &parent);
if (this) {
this->count++;
- goto unlock;
+ return 0;
}
this = kzalloc_obj(*this);
- if (!this) {
- ret = -ENOMEM;
- goto unlock;
- }
+ if (!this)
+ return -ENOMEM;
this->pfn = pfn;
this->count = 1;
rb_link_node(&this->node, parent, node);
rb_insert_color(&this->node, &hyp_shared_pfns);
- ret = kvm_call_hyp_nvhe(__pkvm_host_share_hyp, pfn);
-unlock:
- mutex_unlock(&hyp_shared_pfns_lock);
-
- return ret;
+ return kvm_call_hyp_nvhe(__pkvm_host_share_hyp, pfn);
}
static int unshare_pfn_hyp(u64 pfn)
{
struct rb_node **node, *parent;
struct hyp_shared_pfn *this;
- int ret = 0;
- mutex_lock(&hyp_shared_pfns_lock);
+ guard(mutex)(&hyp_shared_pfns_lock);
this = find_shared_pfn(pfn, &node, &parent);
- if (WARN_ON(!this)) {
- ret = -ENOENT;
- goto unlock;
- }
+ if (WARN_ON(!this))
+ return -ENOENT;
this->count--;
if (this->count)
- goto unlock;
+ return 0;
rb_erase(&this->node, &hyp_shared_pfns);
kfree(this);
- ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, pfn);
-unlock:
- mutex_unlock(&hyp_shared_pfns_lock);
-
- return ret;
+ return kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, pfn);
}
int kvm_share_hyp(void *from, void *to)
@@ -652,22 +632,20 @@ int hyp_alloc_private_va_range(size_t size, unsigned long *haddr)
unsigned long base;
int ret = 0;
- mutex_lock(&kvm_hyp_pgd_mutex);
-
- /*
- * This assumes that we have enough space below the idmap
- * page to allocate our VAs. If not, the check in
- * __hyp_alloc_private_va_range() will kick. A potential
- * alternative would be to detect that overflow and switch
- * to an allocation above the idmap.
- *
- * The allocated size is always a multiple of PAGE_SIZE.
- */
- size = PAGE_ALIGN(size);
- base = io_map_base - size;
- ret = __hyp_alloc_private_va_range(base);
-
- mutex_unlock(&kvm_hyp_pgd_mutex);
+ scoped_guard(mutex, &kvm_hyp_pgd_mutex) {
+ /*
+ * This assumes that we have enough space below the idmap
+ * page to allocate our VAs. If not, the check in
+ * __hyp_alloc_private_va_range() will kick. A potential
+ * alternative would be to detect that overflow and switch
+ * to an allocation above the idmap.
+ *
+ * The allocated size is always a multiple of PAGE_SIZE.
+ */
+ size = PAGE_ALIGN(size);
+ base = io_map_base - size;
+ ret = __hyp_alloc_private_va_range(base);
+ }
if (!ret)
*haddr = base;
@@ -711,17 +689,16 @@ int create_hyp_stack(phys_addr_t phys_addr, unsigned long *haddr)
size_t size;
int ret;
- mutex_lock(&kvm_hyp_pgd_mutex);
- /*
- * Efficient stack verification using the NVHE_STACK_SHIFT bit implies
- * an alignment of our allocation on the order of the size.
- */
- size = NVHE_STACK_SIZE * 2;
- base = ALIGN_DOWN(io_map_base - size, size);
+ scoped_guard(mutex, &kvm_hyp_pgd_mutex) {
+ /*
+ * Efficient stack verification using the NVHE_STACK_SHIFT bit implies
+ * an alignment of our allocation on the order of the size.
+ */
+ size = NVHE_STACK_SIZE * 2;
+ base = ALIGN_DOWN(io_map_base - size, size);
- ret = __hyp_alloc_private_va_range(base);
-
- mutex_unlock(&kvm_hyp_pgd_mutex);
+ ret = __hyp_alloc_private_va_range(base);
+ }
if (ret) {
kvm_err("Cannot allocate hyp stack guard page\n");
--
2.53.0.851.ga537e3e6e9-goog
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH 07/10] KVM: arm64: Use scoped resource management in arm.c
2026-03-16 17:35 [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2 Fuad Tabba
` (5 preceding siblings ...)
2026-03-16 17:35 ` [PATCH 06/10] KVM: arm64: Use guard(mutex) in mmu.c Fuad Tabba
@ 2026-03-16 17:35 ` Fuad Tabba
2026-03-17 17:53 ` Jonathan Cameron
2026-03-16 17:35 ` [PATCH 08/10] KVM: arm64: Use guard(spinlock) in psci.c Fuad Tabba
` (4 subsequent siblings)
11 siblings, 1 reply; 21+ messages in thread
From: Fuad Tabba @ 2026-03-16 17:35 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
Cc: Fuad Tabba
Migrate manual spin_lock() calls managing mp_state_lock and manual
mutex_lock() calls managing kvm->arch.config_lock to use the
guard(spinlock) and guard(mutex) macros.
This eliminates manual unlock calls on early return paths and
simplifies the vCPU suspend/resume control flow.
Change-Id: Ifcd8455d08afa5d00fc200daaa3fb13f6736e6ed
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/arm.c | 53 ++++++++++++++++++++--------------------------------
1 file changed, 20 insertions(+), 33 deletions(-)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 410ffd41fd73..017f5bfabe19 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -274,17 +274,15 @@ static void kvm_destroy_mpidr_data(struct kvm *kvm)
{
struct kvm_mpidr_data *data;
- mutex_lock(&kvm->arch.config_lock);
-
- data = rcu_dereference_protected(kvm->arch.mpidr_data,
- lockdep_is_held(&kvm->arch.config_lock));
- if (data) {
- rcu_assign_pointer(kvm->arch.mpidr_data, NULL);
- synchronize_rcu();
- kfree(data);
+ scoped_guard(mutex, &kvm->arch.config_lock) {
+ data = rcu_dereference_protected(kvm->arch.mpidr_data,
+ lockdep_is_held(&kvm->arch.config_lock));
+ if (data) {
+ rcu_assign_pointer(kvm->arch.mpidr_data, NULL);
+ synchronize_rcu();
+ kfree(data);
+ }
}
-
- mutex_unlock(&kvm->arch.config_lock);
}
/**
@@ -738,9 +736,8 @@ static void __kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
{
- spin_lock(&vcpu->arch.mp_state_lock);
+ guard(spinlock)(&vcpu->arch.mp_state_lock);
__kvm_arm_vcpu_power_off(vcpu);
- spin_unlock(&vcpu->arch.mp_state_lock);
}
bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu)
@@ -773,7 +770,7 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
{
int ret = 0;
- spin_lock(&vcpu->arch.mp_state_lock);
+ guard(spinlock)(&vcpu->arch.mp_state_lock);
switch (mp_state->mp_state) {
case KVM_MP_STATE_RUNNABLE:
@@ -789,8 +786,6 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
ret = -EINVAL;
}
- spin_unlock(&vcpu->arch.mp_state_lock);
-
return ret;
}
@@ -828,11 +823,11 @@ static void kvm_init_mpidr_data(struct kvm *kvm)
u64 aff_set = 0, aff_clr = ~0UL;
struct kvm_vcpu *vcpu;
- mutex_lock(&kvm->arch.config_lock);
+ guard(mutex)(&kvm->arch.config_lock);
if (rcu_access_pointer(kvm->arch.mpidr_data) ||
atomic_read(&kvm->online_vcpus) == 1)
- goto out;
+ return;
kvm_for_each_vcpu(c, vcpu, kvm) {
u64 aff = kvm_vcpu_get_mpidr_aff(vcpu);
@@ -857,7 +852,7 @@ static void kvm_init_mpidr_data(struct kvm *kvm)
GFP_KERNEL_ACCOUNT);
if (!data)
- goto out;
+ return;
data->mpidr_mask = mask;
@@ -869,8 +864,6 @@ static void kvm_init_mpidr_data(struct kvm *kvm)
}
rcu_assign_pointer(kvm->arch.mpidr_data, data);
-out:
- mutex_unlock(&kvm->arch.config_lock);
}
/*
@@ -944,9 +937,8 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
return ret;
}
- mutex_lock(&kvm->arch.config_lock);
+ guard(mutex)(&kvm->arch.config_lock);
set_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags);
- mutex_unlock(&kvm->arch.config_lock);
return ret;
}
@@ -1585,29 +1577,26 @@ static int __kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
{
unsigned long features = init->features[0];
struct kvm *kvm = vcpu->kvm;
- int ret = -EINVAL;
+ int ret;
- mutex_lock(&kvm->arch.config_lock);
+ guard(mutex)(&kvm->arch.config_lock);
if (test_bit(KVM_ARCH_FLAG_VCPU_FEATURES_CONFIGURED, &kvm->arch.flags) &&
kvm_vcpu_init_changed(vcpu, init))
- goto out_unlock;
+ return -EINVAL;
bitmap_copy(kvm->arch.vcpu_features, &features, KVM_VCPU_MAX_FEATURES);
ret = kvm_setup_vcpu(vcpu);
if (ret)
- goto out_unlock;
+ return ret;
/* Now we know what it is, we can reset it. */
kvm_reset_vcpu(vcpu);
set_bit(KVM_ARCH_FLAG_VCPU_FEATURES_CONFIGURED, &kvm->arch.flags);
vcpu_set_flag(vcpu, VCPU_INITIALIZED);
- ret = 0;
-out_unlock:
- mutex_unlock(&kvm->arch.config_lock);
- return ret;
+ return 0;
}
static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
@@ -1674,15 +1663,13 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
/*
* Handle the "start in power-off" case.
*/
- spin_lock(&vcpu->arch.mp_state_lock);
+ guard(spinlock)(&vcpu->arch.mp_state_lock);
if (power_off)
__kvm_arm_vcpu_power_off(vcpu);
else
WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
- spin_unlock(&vcpu->arch.mp_state_lock);
-
return 0;
}
--
2.53.0.851.ga537e3e6e9-goog
* [PATCH 08/10] KVM: arm64: Use guard(spinlock) in psci.c
2026-03-16 17:35 [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2 Fuad Tabba
` (6 preceding siblings ...)
2026-03-16 17:35 ` [PATCH 07/10] KVM: arm64: Use scoped resource management in arm.c Fuad Tabba
@ 2026-03-16 17:35 ` Fuad Tabba
2026-03-17 17:54 ` Jonathan Cameron
2026-03-16 17:35 ` [PATCH 09/10] KVM: arm64: Use guard(spinlock) in reset.c Fuad Tabba
` (3 subsequent siblings)
11 siblings, 1 reply; 21+ messages in thread
From: Fuad Tabba @ 2026-03-16 17:35 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
Cc: Fuad Tabba
Migrate manual spin_lock() and spin_unlock() calls managing
the vcpu->arch.mp_state_lock to use the guard(spinlock) macro.
This eliminates manual unlock calls on return paths and simplifies
error handling during PSCI calls by replacing unlock goto labels
with direct returns.
Change-Id: Iaf72da18b18aaec8edff91bc30379bed9dd04b2b
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/psci.c | 19 +++++++------------
1 file changed, 7 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c
index 3b5dbe9a0a0e..04801f8b596a 100644
--- a/arch/arm64/kvm/psci.c
+++ b/arch/arm64/kvm/psci.c
@@ -62,7 +62,6 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
struct vcpu_reset_state *reset_state;
struct kvm *kvm = source_vcpu->kvm;
struct kvm_vcpu *vcpu = NULL;
- int ret = PSCI_RET_SUCCESS;
unsigned long cpu_id;
cpu_id = smccc_get_arg1(source_vcpu);
@@ -78,14 +77,13 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
if (!vcpu)
return PSCI_RET_INVALID_PARAMS;
- spin_lock(&vcpu->arch.mp_state_lock);
+ guard(spinlock)(&vcpu->arch.mp_state_lock);
+
if (!kvm_arm_vcpu_stopped(vcpu)) {
if (kvm_psci_version(source_vcpu) != KVM_ARM_PSCI_0_1)
- ret = PSCI_RET_ALREADY_ON;
+ return PSCI_RET_ALREADY_ON;
else
- ret = PSCI_RET_INVALID_PARAMS;
-
- goto out_unlock;
+ return PSCI_RET_INVALID_PARAMS;
}
reset_state = &vcpu->arch.reset_state;
@@ -113,9 +111,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
kvm_vcpu_wake_up(vcpu);
-out_unlock:
- spin_unlock(&vcpu->arch.mp_state_lock);
- return ret;
+ return PSCI_RET_SUCCESS;
}
static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
@@ -176,9 +172,8 @@ static void kvm_prepare_system_event(struct kvm_vcpu *vcpu, u32 type, u64 flags)
* re-initialized.
*/
kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
- spin_lock(&tmp->arch.mp_state_lock);
- WRITE_ONCE(tmp->arch.mp_state.mp_state, KVM_MP_STATE_STOPPED);
- spin_unlock(&tmp->arch.mp_state_lock);
+ scoped_guard(spinlock, &tmp->arch.mp_state_lock)
+ WRITE_ONCE(tmp->arch.mp_state.mp_state, KVM_MP_STATE_STOPPED);
}
kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP);
--
2.53.0.851.ga537e3e6e9-goog
* [PATCH 09/10] KVM: arm64: Use guard(spinlock) in reset.c
2026-03-16 17:35 [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2 Fuad Tabba
` (7 preceding siblings ...)
2026-03-16 17:35 ` [PATCH 08/10] KVM: arm64: Use guard(spinlock) in psci.c Fuad Tabba
@ 2026-03-16 17:35 ` Fuad Tabba
2026-03-16 17:35 ` [PATCH 10/10] KVM: arm64: Use guard(mutex) in pkvm.c Fuad Tabba
` (2 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Fuad Tabba @ 2026-03-16 17:35 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
Cc: Fuad Tabba
Migrate manual spin_lock() and spin_unlock() calls managing
the vcpu->arch.mp_state_lock to use the scoped_guard(spinlock)
macro.
This streamlines control flow during vCPU resets by utilizing
RAII-style automated unlocking.
Change-Id: I32e721e67012c4a141f46b220190bf3c28485821
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/reset.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 959532422d3a..e229c6885c10 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -193,10 +193,10 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu)
bool loaded;
u32 pstate;
- spin_lock(&vcpu->arch.mp_state_lock);
- reset_state = vcpu->arch.reset_state;
- vcpu->arch.reset_state.reset = false;
- spin_unlock(&vcpu->arch.mp_state_lock);
+ scoped_guard(spinlock, &vcpu->arch.mp_state_lock) {
+ reset_state = vcpu->arch.reset_state;
+ vcpu->arch.reset_state.reset = false;
+ }
preempt_disable();
loaded = (vcpu->cpu != -1);
--
2.53.0.851.ga537e3e6e9-goog
* [PATCH 10/10] KVM: arm64: Use guard(mutex) in pkvm.c
2026-03-16 17:35 [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2 Fuad Tabba
` (8 preceding siblings ...)
2026-03-16 17:35 ` [PATCH 09/10] KVM: arm64: Use guard(spinlock) in reset.c Fuad Tabba
@ 2026-03-16 17:35 ` Fuad Tabba
2026-03-16 17:39 ` [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2 Fuad Tabba
2026-03-17 8:20 ` Marc Zyngier
11 siblings, 0 replies; 21+ messages in thread
From: Fuad Tabba @ 2026-03-16 17:35 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
Cc: Fuad Tabba
Migrate manual mutex_lock() and mutex_unlock() calls managing
kvm->arch.config_lock to use the guard(mutex) macro.
This eliminates manual unlock calls on early return paths, ensuring
the mutex is safely released during early pKVM host-side VM initialization.
Change-Id: I902ab100f2deb4de7d6fbf0340d4aec30cf49e56
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/pkvm.c | 19 +++++++------------
1 file changed, 7 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index d7a0f69a9982..4a4a9d0699c8 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -190,33 +190,28 @@ bool pkvm_hyp_vm_is_created(struct kvm *kvm)
int pkvm_create_hyp_vm(struct kvm *kvm)
{
- int ret = 0;
+ guard(mutex)(&kvm->arch.config_lock);
- mutex_lock(&kvm->arch.config_lock);
if (!pkvm_hyp_vm_is_created(kvm))
- ret = __pkvm_create_hyp_vm(kvm);
- mutex_unlock(&kvm->arch.config_lock);
+ return __pkvm_create_hyp_vm(kvm);
- return ret;
+ return 0;
}
int pkvm_create_hyp_vcpu(struct kvm_vcpu *vcpu)
{
- int ret = 0;
+ guard(mutex)(&vcpu->kvm->arch.config_lock);
- mutex_lock(&vcpu->kvm->arch.config_lock);
if (!vcpu_get_flag(vcpu, VCPU_PKVM_FINALIZED))
- ret = __pkvm_create_hyp_vcpu(vcpu);
- mutex_unlock(&vcpu->kvm->arch.config_lock);
+ return __pkvm_create_hyp_vcpu(vcpu);
- return ret;
+ return 0;
}
void pkvm_destroy_hyp_vm(struct kvm *kvm)
{
- mutex_lock(&kvm->arch.config_lock);
+ guard(mutex)(&kvm->arch.config_lock);
__pkvm_destroy_hyp_vm(kvm);
- mutex_unlock(&kvm->arch.config_lock);
}
int pkvm_init_host_vm(struct kvm *kvm)
--
2.53.0.851.ga537e3e6e9-goog
* Re: [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2
2026-03-16 17:35 [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2 Fuad Tabba
` (9 preceding siblings ...)
2026-03-16 17:35 ` [PATCH 10/10] KVM: arm64: Use guard(mutex) in pkvm.c Fuad Tabba
@ 2026-03-16 17:39 ` Fuad Tabba
2026-03-17 8:20 ` Marc Zyngier
11 siblings, 0 replies; 21+ messages in thread
From: Fuad Tabba @ 2026-03-16 17:39 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
Hi again,
On Mon, 16 Mar 2026 at 17:35, Fuad Tabba <tabba@google.com> wrote:
>
> Hi everyone,
This was my first time using b4: I thought it would strip the
Change-Ids, and this was supposed to be a "reflect" run, but
obviously something went wrong there :)
Other than that, this series is ready for review. Sorry for the Change-id noise!
Cheers,
/fuad
>
> I stumbled upon a lock leak while reviewing Sebastien's recent GIC Hardening
> series [1] on an early error path. It reminded me that we've run into this exact
> kind of bug pretty often while working on pKVM. Since we're hopefully going
> to upstream even more pKVM code soon, I'd like to try and fix this
> pattern once and for all.
>
> This series refactors a good chunk of KVM/arm64 codebase—covering both the host
> at EL1 and the hypervisor at EL2—to use the guard() and scoped_guard() macros
> from <linux/cleanup.h>.
>
> By tying our locks to the scope:
> - We can safely use early returns (like return -EINVAL;) without accidentally
> leaking locks.
> - We can get rid of a lot of tangled goto exit labels.
> - Reviewing patches gets a lot easier since we don't have to mentally track lock
> states across giant functions.
>
> A common concern with these RAII-style macros is that they might bloat the
> generated assembly. However, looking at the compiled code, it actually does the
> opposite. As an example, looking at the EL2 object file
> arch/arm64/kvm/hyp/nvhe/pkvm.nvhe.o before and after this series:
> - Instruction count: Dropped by 39 lines of assembly.
> - .text segment size: Shrank by 166 bytes.
>
> This happens because the compiler no longer has to generate dedicated branch
> targets for exit blocks, nor the stack frame unwinding jumps just to call
> hyp_spin_unlock. It just efficiently inlines the unlock instruction right where
> the return happens.
>
> The series is split into two parts:
> - Patches 1-5 (EL2): Adds the DEFINE_LOCK_GUARD_1 wrapper for hyp_spinlock and
> cleans up the nvhe/ directory.
> - Patches 6-10 (EL1): Cleans up manual mutex_lock and spin_lock calls across the
> standard KVM host code (mmu.c, arm.c, psci.c, reset.c, pkvm.c).
>
> There are definitely other parts of the KVM codebase that could benefit from
> this (especially the vGIC), but I'm stopping here for now to see what everyone
> thinks of the approach before touching anything else.
>
> Based on Linux 7.0-rc3.
>
> Cheers,
> /fuad
>
> [1] https://lore.kernel.org/all/CA+EHjTy051kjdReuESmk3HpMsBEW=s3DxhJXa8vQJFHz--wG2Q@mail.gmail.com/
>
> To:
>
> ---
> Fuad Tabba (10):
> KVM: arm64: Add scoped resource management (guard) for hyp_spinlock
> KVM: arm64: Use guard(hyp_spinlock) in page_alloc.c
> KVM: arm64: Use guard(hyp_spinlock) in ffa.c
> KVM: arm64: Use guard(hyp_spinlock) in mm.c
> KVM: arm64: Use guard(hyp_spinlock) in pkvm.c
> KVM: arm64: Use guard(mutex) in mmu.c
> KVM: arm64: Use scoped resource management in arm.c
> KVM: arm64: Use guard(spinlock) in psci.c
> KVM: arm64: Use guard(spinlock) in reset.c
> KVM: arm64: Use guard(mutex) in pkvm.c
>
> arch/arm64/kvm/arm.c | 53 +++++--------
> arch/arm64/kvm/hyp/include/nvhe/spinlock.h | 5 ++
> arch/arm64/kvm/hyp/nvhe/ffa.c | 86 +++++++++-----------
> arch/arm64/kvm/hyp/nvhe/mm.c | 37 +++------
> arch/arm64/kvm/hyp/nvhe/page_alloc.c | 13 +--
> arch/arm64/kvm/hyp/nvhe/pkvm.c | 122 ++++++++++++-----------------
> arch/arm64/kvm/mmu.c | 95 +++++++++-------------
> arch/arm64/kvm/pkvm.c | 19 ++---
> arch/arm64/kvm/psci.c | 19 ++---
> arch/arm64/kvm/reset.c | 8 +-
> 10 files changed, 182 insertions(+), 275 deletions(-)
> ---
> base-commit: 1f318b96cc84d7c2ab792fcc0bfd42a7ca890681
> change-id: 20260316-tabba-el2_guard-2f7e3fa06f69
>
> Best regards,
> --
> Fuad Tabba <tabba@google.com>
>
* Re: [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2
2026-03-16 17:35 [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2 Fuad Tabba
` (10 preceding siblings ...)
2026-03-16 17:39 ` [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2 Fuad Tabba
@ 2026-03-17 8:20 ` Marc Zyngier
2026-03-17 9:06 ` Fuad Tabba
11 siblings, 1 reply; 21+ messages in thread
From: Marc Zyngier @ 2026-03-17 8:20 UTC (permalink / raw)
To: Fuad Tabba
Cc: Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
On Mon, 16 Mar 2026 17:35:21 +0000,
Fuad Tabba <tabba@google.com> wrote:
>
> Hi everyone,
>
> I stumbled upon a lock leak while reviewing Sebastien's recent GIC Hardening
> series [1] on an early error path. It reminded me that we've run into this exact
> kind of bug pretty often while working on pKVM. Since we're going to be
> hopefully upstream even more pKVM code soon, I'd like to try and fix this
> pattern once and for all.
I'm of the opposite opinion. I'd rather convert things as code gets
modified, because this otherwise makes backporting patches a real pain
(we have been through that exercise with the irq stack, and it wasn't
an enjoyable experience).
So while I'm not opposed to this in general, I'd rather see it as a
prefix for new features, instead of a standalone series.
[...]
> There are definitely other parts of the KVM codebase that could benefit from
> this (especially the vGIC), but I'm stopping here for now to see what everyone
> thinks of the approach before touching anything else.
The vgic is especially difficult to convert because of some of the
constructs that take a lock in a function and release it in another,
defeating the scope-based locking.
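To make that constraint concrete: a guard's unlock is tied to the scope of the
function that declares it, so a lock taken in one function and released by a
different one has no single scope to attach a guard to. A minimal sketch of the
shape of the problem, with hypothetical names:

```c
#include <assert.h>

/* Pattern that cannot be converted: the lock outlives its function. */
struct vgic_like { int lock_held; };

/* Acquire side: deliberately returns to the caller with the lock still
 * held, so a scope-based guard here would (wrongly) drop it on return. */
static void begin_update(struct vgic_like *v)
{
	v->lock_held = 1;	/* manual lock, not a guard */
}

/* Release side lives in a different function entirely. */
static void end_update(struct vgic_like *v)
{
	v->lock_held = 0;
}
```

Converting begin_update() to a guard would drop the lock as soon as it
returns, which is exactly what the cross-function construct relies on not
happening; such call sites have to keep explicit lock/unlock pairs.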
I'll have a look at the series anyway.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2
2026-03-17 8:20 ` Marc Zyngier
@ 2026-03-17 9:06 ` Fuad Tabba
0 siblings, 0 replies; 21+ messages in thread
From: Fuad Tabba @ 2026-03-17 9:06 UTC (permalink / raw)
To: Marc Zyngier
Cc: Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
Hi Marc,
On Tue, 17 Mar 2026 at 08:20, Marc Zyngier <maz@kernel.org> wrote:
>
> On Mon, 16 Mar 2026 17:35:21 +0000,
> Fuad Tabba <tabba@google.com> wrote:
> >
> > Hi everyone,
> >
> > I stumbled upon a lock leak while reviewing Sebastien's recent GIC Hardening
> > series [1] on an early error path. It reminded me that we've run into this exact
> > kind of bug pretty often while working on pKVM. Since we're going to be
> > hopefully upstream even more pKVM code soon, I'd like to try and fix this
> > pattern once and for all.
>
> I'm of the opposite opinion. I'd rather convert things as code gets
> modified, because this otherwise makes backporting patches a real pain
> (we have been through that exercise with the irq stack, and it wasn't
> an enjoyable experience).
>
> So while I'm not opposed to this in general, I'd rather see it as a
> prefix for new features, instead of a standalone series.
I'm happy to do whatever is easier for you. My thinking was that,
since the coming pKVM series (assuming Will's series goes in soon) is
going to be pretty long, it would be easier to have this done earlier.
>
> [...]
>
> > There are definitely other parts of the KVM codebase that could benefit from
> > this (especially the vGIC), but I'm stopping here for now to see what everyone
> > thinks of the approach before touching anything else.
>
> The vgic is specially difficult to convert because of some of the
> constructs that take a lock in a function and release it in another,
> defeating the scope-based locking.
>
> I'll have a look at the series anyway.
Thanks! If you think this is a good idea, let me know and I'll
respin without those "Change-ids".
Cheers,
/fuad
>
> Thanks,
>
> M.
>
> --
> Without deviation from the norm, progress is not possible.
* Re: [PATCH 03/10] KVM: arm64: Use guard(hyp_spinlock) in ffa.c
2026-03-16 17:35 ` [PATCH 03/10] KVM: arm64: Use guard(hyp_spinlock) in ffa.c Fuad Tabba
@ 2026-03-17 17:40 ` Jonathan Cameron
2026-03-17 18:01 ` Fuad Tabba
0 siblings, 1 reply; 21+ messages in thread
From: Jonathan Cameron @ 2026-03-17 17:40 UTC (permalink / raw)
To: Fuad Tabba
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
On Mon, 16 Mar 2026 17:35:24 +0000
Fuad Tabba <tabba@google.com> wrote:
> Migrate manual hyp_spin_lock() and hyp_spin_unlock() calls managing
> host_buffers.lock and version_lock to use the guard(hyp_spinlock)
> macro.
>
> This eliminates manual unlock calls on return paths and simplifies
> error handling by replacing goto labels with direct returns.
>
> Change-Id: I52e31c0bed3d2772c800a535af8abdabd81a178b
> Signed-off-by: Fuad Tabba <tabba@google.com>
See below and read the cleanup.h guidance notes on usage at the top.
Specifically:
* Lastly, given that the benefit of cleanup helpers is removal of
* "goto", and that the "goto" statement can jump between scopes, the
* expectation is that usage of "goto" and cleanup helpers is never
* mixed in the same function. I.e. for a given routine, convert all
* resources that need a "goto" cleanup to scope-based cleanup, or
* convert none of them.
Doing otherwise might mean an encounter with a grumpy Linus :)
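One refactor that satisfies the all-or-nothing rule is to move the locked
region into a helper, so the function using the guard contains no goto at all
and the caller keeps its unwind labels for the non-lock resources. A hedged
sketch with illustrative names, not the actual ffa.c code:

```c
#include <assert.h>

struct toy_lock { int held; };
static void toy_lock_cleanup(struct toy_lock **g) { (*g)->held = 0; }
#define toy_guard(l)						\
	struct toy_lock *__g					\
	__attribute__((cleanup(toy_lock_cleanup), unused)) =	\
		((l)->held = 1, (l))

static struct toy_lock buffers_lock;

/* All scope-based cleanup, no goto: safe to use the guard here. */
static int map_buffers_locked(int tx_busy)
{
	toy_guard(&buffers_lock);
	if (tx_busy)
		return -16;	/* an -EBUSY-style early return */
	return 0;
}

/* The caller keeps its goto-based unwind for the non-lock resource and
 * never mixes it with a guard in the same function. */
static int map_buffers(int tx_busy, int *mapped)
{
	int ret;

	*mapped = 1;			/* some resource to unwind */
	ret = map_buffers_locked(tx_busy);
	if (ret)
		goto err_unmap;
	return 0;
err_unmap:
	*mapped = 0;
	return ret;
}
```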
> ---
> arch/arm64/kvm/hyp/nvhe/ffa.c | 86 +++++++++++++++++++------------------------
> 1 file changed, 38 insertions(+), 48 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/ffa.c b/arch/arm64/kvm/hyp/nvhe/ffa.c
> index 94161ea1cd60..0c772501c3ba 100644
> --- a/arch/arm64/kvm/hyp/nvhe/ffa.c
> +++ b/arch/arm64/kvm/hyp/nvhe/ffa.c
> @@ -239,6 +239,8 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
> int ret = 0;
> void *rx_virt, *tx_virt;
>
> + guard(hyp_spinlock)(&host_buffers.lock);
> +
> if (npages != (KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) / FFA_PAGE_SIZE) {
> ret = FFA_RET_INVALID_PARAMETERS;
> goto out;
> @@ -249,10 +251,9 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
> goto out;
> }
>
> - hyp_spin_lock(&host_buffers.lock);
> if (host_buffers.tx) {
> ret = FFA_RET_DENIED;
> - goto out_unlock;
> + goto out;
Whilst it isn't a bug as such, because you increased the scope to avoid it,
there is some pretty strong guidance in cleanup.h about not using guard() or
any of the other magic in a function that also contains gotos. That came from
some views Linus expressed pretty clearly about the dangers this brings: the
problem is a goto that crosses the point where a guard() is declared, the risk
of refactors that happen to end up with exactly that, and the general view
that the cleanup.h stuff is about removing gotos, so if you still have them,
why bother!
You can usually end up with the best of all worlds via some refactors
to pull out helper functions, but that's a lot of churn.
Jonathan
> }
>
> /*
> @@ -261,7 +262,7 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
> */
> ret = ffa_map_hyp_buffers(npages);
> if (ret)
> - goto out_unlock;
> + goto out;
>
> ret = __pkvm_host_share_hyp(hyp_phys_to_pfn(tx));
> if (ret) {
> @@ -292,8 +293,6 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
> host_buffers.tx = tx_virt;
> host_buffers.rx = rx_virt;
>
> -out_unlock:
> - hyp_spin_unlock(&host_buffers.lock);
> out:
> ffa_to_smccc_res(res, ret);
> return;
> @@ -306,7 +305,7 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
> __pkvm_host_unshare_hyp(hyp_phys_to_pfn(tx));
> err_unmap:
> ffa_unmap_hyp_buffers();
> - goto out_unlock;
> + goto out;
> }
>
* Re: [PATCH 04/10] KVM: arm64: Use guard(hyp_spinlock) in mm.c
2026-03-16 17:35 ` [PATCH 04/10] KVM: arm64: Use guard(hyp_spinlock) in mm.c Fuad Tabba
@ 2026-03-17 17:44 ` Jonathan Cameron
0 siblings, 0 replies; 21+ messages in thread
From: Jonathan Cameron @ 2026-03-17 17:44 UTC (permalink / raw)
To: Fuad Tabba
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
On Mon, 16 Mar 2026 17:35:25 +0000
Fuad Tabba <tabba@google.com> wrote:
> Migrate manual hyp_spin_lock() and hyp_spin_unlock() calls managing
> pkvm_pgd_lock to use the guard(hyp_spinlock) macro.
>
> This eliminates manual unlock calls on return paths and simplifies
> error handling by replacing goto labels with direct returns.
> Note: hyp_fixblock_lock spans across hyp_fixblock_map/unmap functions,
> so it retains explicit lock/unlock semantics to avoid RAII violations.
>
> Change-Id: I6bb3f4105e95480269e5bf8289d084c8f9981730
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> arch/arm64/kvm/hyp/nvhe/mm.c | 37 ++++++++++---------------------------
> 1 file changed, 10 insertions(+), 27 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
> index 218976287d3f..7a15c9fc15e5 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mm.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mm.c
> @@ -35,13 +35,8 @@ static DEFINE_PER_CPU(struct hyp_fixmap_slot, fixmap_slots);
>
> static int __pkvm_alloc_private_va_range(unsigned long start, size_t size)
> @@ -80,10 +75,9 @@ int pkvm_alloc_private_va_range(size_t size, unsigned long *haddr)
> unsigned long addr;
> int ret;
>
> - hyp_spin_lock(&pkvm_pgd_lock);
> + guard(hyp_spinlock)(&pkvm_pgd_lock);
> addr = __io_map_base;
> ret = __pkvm_alloc_private_va_range(addr, size);
> - hyp_spin_unlock(&pkvm_pgd_lock);
>
> *haddr = addr;
Maybe it loses meaning, but given this sets *haddr whether or not there
is an error, you can reorder and save a few lines:
addr = __io_map_base;
*haddr = addr;
return __pkvm_alloc_private_va_range(addr, size);
* Re: [PATCH 05/10] KVM: arm64: Use guard(hyp_spinlock) in pkvm.c
2026-03-16 17:35 ` [PATCH 05/10] KVM: arm64: Use guard(hyp_spinlock) in pkvm.c Fuad Tabba
@ 2026-03-17 17:47 ` Jonathan Cameron
0 siblings, 0 replies; 21+ messages in thread
From: Jonathan Cameron @ 2026-03-17 17:47 UTC (permalink / raw)
To: Fuad Tabba
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
On Mon, 16 Mar 2026 17:35:26 +0000
Fuad Tabba <tabba@google.com> wrote:
> Migrate manual hyp_spin_lock() and hyp_spin_unlock() calls managing
> the global vm_table_lock to use the guard(hyp_spinlock) macro.
>
> This significantly cleans up validation and error paths during VM
> creation and manipulation by eliminating the need for goto labels
> and manual unlock procedures on early returns.
>
> Change-Id: I894df69b3cfe053a77dd660dfb70c95640c6d70c
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>
> struct pkvm_hyp_vm *get_np_pkvm_hyp_vm(pkvm_handle_t handle)
> @@ -613,9 +605,8 @@ static int insert_vm_table_entry(pkvm_handle_t handle,
> {
> int ret;
>
> - hyp_spin_lock(&vm_table_lock);
> + guard(hyp_spinlock)(&vm_table_lock);
> ret = __insert_vm_table_entry(handle, hyp_vm);
> - hyp_spin_unlock(&vm_table_lock);
>
> return ret;
return __insert_vm_table_entry(handle, hyp_vm);
> }
> @@ -815,35 +804,35 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
> if (!hyp_vcpu)
> return -ENOMEM;
>
> - hyp_spin_lock(&vm_table_lock);
> + scoped_guard(hyp_spinlock, &vm_table_lock) {
> + hyp_vm = get_vm_by_handle(handle);
> + if (!hyp_vm) {
> + ret = -ENOENT;
> + goto err_unmap;
As in earlier patch. I'd not mix gotos and guard()s.
> + }
>
> - hyp_vm = get_vm_by_handle(handle);
> - if (!hyp_vm) {
> - ret = -ENOENT;
> - goto unlock;
> + ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu);
> + if (ret)
> + goto err_unmap;
> +
> + idx = hyp_vcpu->vcpu.vcpu_idx;
> + if (idx >= hyp_vm->kvm.created_vcpus) {
> + ret = -EINVAL;
> + goto err_unmap;
> + }
> +
> + if (hyp_vm->vcpus[idx]) {
> + ret = -EINVAL;
> + goto err_unmap;
> + }
> +
> + hyp_vm->vcpus[idx] = hyp_vcpu;
> +
> + return 0;
> }
>
> - ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu);
> - if (ret)
> - goto unlock;
> -
> - idx = hyp_vcpu->vcpu.vcpu_idx;
> - if (idx >= hyp_vm->kvm.created_vcpus) {
> - ret = -EINVAL;
> - goto unlock;
> - }
> -
> - if (hyp_vm->vcpus[idx]) {
> - ret = -EINVAL;
> - goto unlock;
> - }
> -
> - hyp_vm->vcpus[idx] = hyp_vcpu;
> -unlock:
> - hyp_spin_unlock(&vm_table_lock);
> -
> - if (ret)
> - unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu));
> +err_unmap:
> + unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu));
> return ret;
> }
>
>
* Re: [PATCH 06/10] KVM: arm64: Use guard(mutex) in mmu.c
2026-03-16 17:35 ` [PATCH 06/10] KVM: arm64: Use guard(mutex) in mmu.c Fuad Tabba
@ 2026-03-17 17:50 ` Jonathan Cameron
0 siblings, 0 replies; 21+ messages in thread
From: Jonathan Cameron @ 2026-03-17 17:50 UTC (permalink / raw)
To: Fuad Tabba
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
On Mon, 16 Mar 2026 17:35:27 +0000
Fuad Tabba <tabba@google.com> wrote:
> Migrate manual mutex_lock() and mutex_unlock() calls managing
> kvm_hyp_pgd_mutex and hyp_shared_pfns_lock to use the
> guard(mutex) macro.
>
> This eliminates manual unlock calls on return paths and simplifies
> error handling by replacing unlock goto labels with direct returns.
> Centralized cleanup goto paths are preserved with manual unlocks
> removed.
>
> Change-Id: Ib0f33a474eb84f19da4de0858c77751bbe55dfbb
> Signed-off-by: Fuad Tabba <tabba@google.com>
> @@ -652,22 +632,20 @@ int hyp_alloc_private_va_range(size_t size, unsigned long *haddr)
> unsigned long base;
> int ret = 0;
>
> - mutex_lock(&kvm_hyp_pgd_mutex);
> -
> - /*
> - * This assumes that we have enough space below the idmap
> - * page to allocate our VAs. If not, the check in
> - * __hyp_alloc_private_va_range() will kick. A potential
> - * alternative would be to detect that overflow and switch
> - * to an allocation above the idmap.
> - *
> - * The allocated size is always a multiple of PAGE_SIZE.
> - */
> - size = PAGE_ALIGN(size);
> - base = io_map_base - size;
> - ret = __hyp_alloc_private_va_range(base);
> -
> - mutex_unlock(&kvm_hyp_pgd_mutex);
> + scoped_guard(mutex, &kvm_hyp_pgd_mutex) {
> + /*
> + * This assumes that we have enough space below the idmap
> + * page to allocate our VAs. If not, the check in
> + * __hyp_alloc_private_va_range() will kick. A potential
> + * alternative would be to detect that overflow and switch
> + * to an allocation above the idmap.
> + *
> + * The allocated size is always a multiple of PAGE_SIZE.
> + */
> + size = PAGE_ALIGN(size);
> + base = io_map_base - size;
> + ret = __hyp_alloc_private_va_range(base);
Minor one and matter of taste, but I'd do
if (ret)
return ret;
}
*haddr = base;
return 0;
> + }
>
> if (!ret)
> *haddr = base;
> @@ -711,17 +689,16 @@ int create_hyp_stack(phys_addr_t phys_addr, unsigned long *haddr)
> size_t size;
> int ret;
>
> - mutex_lock(&kvm_hyp_pgd_mutex);
> - /*
> - * Efficient stack verification using the NVHE_STACK_SHIFT bit implies
> - * an alignment of our allocation on the order of the size.
> - */
> - size = NVHE_STACK_SIZE * 2;
> - base = ALIGN_DOWN(io_map_base - size, size);
> + scoped_guard(mutex, &kvm_hyp_pgd_mutex) {
> + /*
> + * Efficient stack verification using the NVHE_STACK_SHIFT bit implies
> + * an alignment of our allocation on the order of the size.
> + */
> + size = NVHE_STACK_SIZE * 2;
> + base = ALIGN_DOWN(io_map_base - size, size);
>
> - ret = __hyp_alloc_private_va_range(base);
> -
> - mutex_unlock(&kvm_hyp_pgd_mutex);
> + ret = __hyp_alloc_private_va_range(base);
> + }
>
> if (ret) {
> kvm_err("Cannot allocate hyp stack guard page\n");
Maybe move this under the guard just to keep the error check nearer the code
in question.
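To make the suggested shape concrete, here is a standalone toy of the scoped_guard() pattern (hypothetical names; the real macro in <linux/cleanup.h> is likewise a one-iteration for loop wrapping a cleanup variable), with the failure handled inside the scope and *haddr only written once the lock is gone:

```c
#include <assert.h>

/* Toy stand-ins for the kvm_hyp_pgd_mutex pattern; illustrative only. */
typedef struct { int locked; } toy_mutex_t;

static toy_mutex_t pgd_mutex;

static void toy_lock(toy_mutex_t *m)    { m->locked = 1; }
static void toy_unlock(toy_mutex_t **mp) { (*mp)->locked = 0; }

/* One-iteration for loop: the body runs once with the lock held. */
#define toy_scoped_guard(m) \
	for (toy_mutex_t *_g __attribute__((cleanup(toy_unlock))) = \
		(toy_lock(m), (m)), *_once = (void *)1; _once; _once = 0)

static int alloc_va(int fail, unsigned long *haddr)
{
	unsigned long base = 0;

	toy_scoped_guard(&pgd_mutex) {
		assert(pgd_mutex.locked);
		if (fail)
			return -12;	/* unlock runs before the return */
		base = 0x1000;
	}

	assert(!pgd_mutex.locked);	/* released when the scope ended */
	*haddr = base;
	return 0;
}
```

Returning from inside the scoped block is safe because the cleanup fires on any scope exit, which is what makes the direct-return restructure possible.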
Thanks,
Jonathan
>
* Re: [PATCH 07/10] KVM: arm64: Use scoped resource management in arm.c
2026-03-16 17:35 ` [PATCH 07/10] KVM: arm64: Use scoped resource management in arm.c Fuad Tabba
@ 2026-03-17 17:53 ` Jonathan Cameron
0 siblings, 0 replies; 21+ messages in thread
From: Jonathan Cameron @ 2026-03-17 17:53 UTC (permalink / raw)
To: Fuad Tabba
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
On Mon, 16 Mar 2026 17:35:28 +0000
Fuad Tabba <tabba@google.com> wrote:
> Migrate manual spin_lock() calls managing mp_state_lock and manual
> mutex_lock() calls managing kvm->arch.config_lock to use the
> guard(spinlock) and guard(mutex) macros.
>
> This eliminates manual unlock calls on early return paths and
> simplifies the vCPU suspend/resume control flow.
>
> Change-Id: Ifcd8455d08afa5d00fc200daaa3fb13f6736e6ed
> Signed-off-by: Fuad Tabba <tabba@google.com>
Trivial stuff.
> ---
> arch/arm64/kvm/arm.c | 53 ++++++++++++++++++++--------------------------------
> 1 file changed, 20 insertions(+), 33 deletions(-)
>
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 410ffd41fd73..017f5bfabe19 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
>
> /**
> @@ -738,9 +736,8 @@ static void __kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
>
> void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
> {
> - spin_lock(&vcpu->arch.mp_state_lock);
> + guard(spinlock)(&vcpu->arch.mp_state_lock);
> __kvm_arm_vcpu_power_off(vcpu);
> - spin_unlock(&vcpu->arch.mp_state_lock);
> }
>
> bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu)
> @@ -773,7 +770,7 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
> {
> int ret = 0;
>
> - spin_lock(&vcpu->arch.mp_state_lock);
> + guard(spinlock)(&vcpu->arch.mp_state_lock);
>
> switch (mp_state->mp_state) {
> case KVM_MP_STATE_RUNNABLE:
> @@ -789,8 +786,6 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
> ret = -EINVAL;
return -EINVAL;
}
return 0;
Or even return in the switch legs instead of breaking out.
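A standalone toy of that switch-leg shape (hypothetical names, gcc/clang cleanup attribute standing in for guard(spinlock)):

```c
#include <assert.h>

/* Toy stand-in for the mp_state_lock pattern; illustrative only. */
typedef struct { int locked; } toy_spinlock_t;

static toy_spinlock_t mp_state_lock;

static void toy_lock(toy_spinlock_t *l)    { l->locked = 1; }
static void toy_unlock(toy_spinlock_t **lp) { (*lp)->locked = 0; }

#define toy_guard(l) \
	toy_spinlock_t *_guard __attribute__((cleanup(toy_unlock))) = \
		(toy_lock(l), (l))

enum { MP_RUNNABLE, MP_STOPPED, MP_BOGUS };

/*
 * With the lock tied to the scope, each leg can return directly: no ret
 * variable, no break-then-unlock bookkeeping at the bottom.
 */
static int set_mp_state(int state)
{
	toy_guard(&mp_state_lock);

	switch (state) {
	case MP_RUNNABLE:
	case MP_STOPPED:
		return 0;	/* unlock runs here... */
	default:
		return -22;	/* ...and here */
	}
}
```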
> }
>
> - spin_unlock(&vcpu->arch.mp_state_lock);
> -
> return ret;
> }
> /*
> @@ -944,9 +937,8 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
> return ret;
> }
>
> - mutex_lock(&kvm->arch.config_lock);
> + guard(mutex)(&kvm->arch.config_lock);
> set_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags);
> - mutex_unlock(&kvm->arch.config_lock);
>
> return ret;
Personally I'd make this return 0 whilst you are here so it's obvious this
path only ever returns success.
> }
* Re: [PATCH 08/10] KVM: arm64: Use guard(spinlock) in psci.c
2026-03-16 17:35 ` [PATCH 08/10] KVM: arm64: Use guard(spinlock) in psci.c Fuad Tabba
@ 2026-03-17 17:54 ` Jonathan Cameron
0 siblings, 0 replies; 21+ messages in thread
From: Jonathan Cameron @ 2026-03-17 17:54 UTC (permalink / raw)
To: Fuad Tabba
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
On Mon, 16 Mar 2026 17:35:29 +0000
Fuad Tabba <tabba@google.com> wrote:
> Migrate manual spin_lock() and spin_unlock() calls managing
> the vcpu->arch.mp_state_lock to use the guard(spinlock) macro.
>
> This eliminates manual unlock calls on return paths and simplifies
> error handling during PSCI calls by replacing unlock goto labels
> with direct returns.
>
> Change-Id: Iaf72da18b18aaec8edff91bc30379bed9dd04b2b
> Signed-off-by: Fuad Tabba <tabba@google.com>
> @@ -176,9 +172,8 @@ static void kvm_prepare_system_event(struct kvm_vcpu *vcpu, u32 type, u64 flags)
> * re-initialized.
> */
> kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
> - spin_lock(&tmp->arch.mp_state_lock);
> - WRITE_ONCE(tmp->arch.mp_state.mp_state, KVM_MP_STATE_STOPPED);
> - spin_unlock(&tmp->arch.mp_state_lock);
> + scoped_guard(spinlock, &tmp->arch.mp_state_lock)
No benefit over guard() and causes more churn.
> + WRITE_ONCE(tmp->arch.mp_state.mp_state, KVM_MP_STATE_STOPPED);
> }
> kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP);
>
>
* Re: [PATCH 03/10] KVM: arm64: Use guard(hyp_spinlock) in ffa.c
2026-03-17 17:40 ` Jonathan Cameron
@ 2026-03-17 18:01 ` Fuad Tabba
0 siblings, 0 replies; 21+ messages in thread
From: Fuad Tabba @ 2026-03-17 18:01 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
Hi Jonathan,
On Tue, 17 Mar 2026 at 17:40, Jonathan Cameron
<jonathan.cameron@huawei.com> wrote:
>
> On Mon, 16 Mar 2026 17:35:24 +0000
> Fuad Tabba <tabba@google.com> wrote:
>
> > Migrate manual hyp_spin_lock() and hyp_spin_unlock() calls managing
> > host_buffers.lock and version_lock to use the guard(hyp_spinlock)
> > macro.
> >
> > This eliminates manual unlock calls on return paths and simplifies
> > error handling by replacing goto labels with direct returns.
> >
> > Change-Id: I52e31c0bed3d2772c800a535af8abdabd81a178b
> > Signed-off-by: Fuad Tabba <tabba@google.com>
>
> See below and read the cleanup.h guidance notes on usage at the top.
> Specifically:
>
> * Lastly, given that the benefit of cleanup helpers is removal of
> * "goto", and that the "goto" statement can jump between scopes, the
> * expectation is that usage of "goto" and cleanup helpers is never
> * mixed in the same function. I.e. for a given routine, convert all
> * resources that need a "goto" cleanup to scope-based cleanup, or
> * convert none of them.
>
> Doing otherwise might mean an encounter with a grumpy Linus :)
I should have read the cleanup.h guidance more closely, thanks for
pointing this out.
> > ---
> > arch/arm64/kvm/hyp/nvhe/ffa.c | 86 +++++++++++++++++++------------------------
> > 1 file changed, 38 insertions(+), 48 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/hyp/nvhe/ffa.c b/arch/arm64/kvm/hyp/nvhe/ffa.c
> > index 94161ea1cd60..0c772501c3ba 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/ffa.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/ffa.c
> > @@ -239,6 +239,8 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
> > int ret = 0;
> > void *rx_virt, *tx_virt;
> >
> > + guard(hyp_spinlock)(&host_buffers.lock);
> > +
> > if (npages != (KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) / FFA_PAGE_SIZE) {
> > ret = FFA_RET_INVALID_PARAMETERS;
> > goto out;
> > @@ -249,10 +251,9 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
> > goto out;
> > }
> >
> > - hyp_spin_lock(&host_buffers.lock);
> > if (host_buffers.tx) {
> > ret = FFA_RET_DENIED;
> > - goto out_unlock;
> > + goto out;
> Whilst it isn't a bug as such because you increased the scope to avoid it,
> there is some pretty strong guidance in cleanup.h about not using guard() or
> any of the other magic in a function that also contains gotos. That came from
> some views Linus expressed pretty clearly about the dangers it brings: the
> problem is a goto that crosses the point where a guard() is defined, the risk
> of refactors that happen to end up with that, and the general view that the
> cleanup.h machinery is about removing gotos, so if you still have them, why
> bother!
>
> You can usually end up with the best of all worlds via some refactors
> to pull out helper functions, but that's a lot of churn.
If I respin this series (depending on whether the maintainers think
it's worth the hassle), I'll remove all changes that just add churn.
Thanks for having a look at this and the other ones.
Cheers,
/fuad
>
> Jonathan
>
>
> > }
> >
> > /*
> > @@ -261,7 +262,7 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
> > */
> > ret = ffa_map_hyp_buffers(npages);
> > if (ret)
> > - goto out_unlock;
> > + goto out;
> >
> > ret = __pkvm_host_share_hyp(hyp_phys_to_pfn(tx));
> > if (ret) {
> > @@ -292,8 +293,6 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
> > host_buffers.tx = tx_virt;
> > host_buffers.rx = rx_virt;
> >
> > -out_unlock:
> > - hyp_spin_unlock(&host_buffers.lock);
> > out:
> > ffa_to_smccc_res(res, ret);
> > return;
> > @@ -306,7 +305,7 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
> > __pkvm_host_unshare_hyp(hyp_phys_to_pfn(tx));
> > err_unmap:
> > ffa_unmap_hyp_buffers();
> > - goto out_unlock;
> > + goto out;
> > }
> >
>
end of thread, other threads:[~2026-03-17 18:01 UTC | newest]
Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-16 17:35 [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2 Fuad Tabba
2026-03-16 17:35 ` [PATCH 01/10] KVM: arm64: Add scoped resource management (guard) for hyp_spinlock Fuad Tabba
2026-03-16 17:35 ` [PATCH 02/10] KVM: arm64: Use guard(hyp_spinlock) in page_alloc.c Fuad Tabba
2026-03-16 17:35 ` [PATCH 03/10] KVM: arm64: Use guard(hyp_spinlock) in ffa.c Fuad Tabba
2026-03-17 17:40 ` Jonathan Cameron
2026-03-17 18:01 ` Fuad Tabba
2026-03-16 17:35 ` [PATCH 04/10] KVM: arm64: Use guard(hyp_spinlock) in mm.c Fuad Tabba
2026-03-17 17:44 ` Jonathan Cameron
2026-03-16 17:35 ` [PATCH 05/10] KVM: arm64: Use guard(hyp_spinlock) in pkvm.c Fuad Tabba
2026-03-17 17:47 ` Jonathan Cameron
2026-03-16 17:35 ` [PATCH 06/10] KVM: arm64: Use guard(mutex) in mmu.c Fuad Tabba
2026-03-17 17:50 ` Jonathan Cameron
2026-03-16 17:35 ` [PATCH 07/10] KVM: arm64: Use scoped resource management in arm.c Fuad Tabba
2026-03-17 17:53 ` Jonathan Cameron
2026-03-16 17:35 ` [PATCH 08/10] KVM: arm64: Use guard(spinlock) in psci.c Fuad Tabba
2026-03-17 17:54 ` Jonathan Cameron
2026-03-16 17:35 ` [PATCH 09/10] KVM: arm64: Use guard(spinlock) in reset.c Fuad Tabba
2026-03-16 17:35 ` [PATCH 10/10] KVM: arm64: Use guard(mutex) in pkvm.c Fuad Tabba
2026-03-16 17:39 ` [PATCH 00/10] KVM: arm64: Adopt scoped resource management (guard) for EL1 and EL2 Fuad Tabba
2026-03-17 8:20 ` Marc Zyngier
2026-03-17 9:06 ` Fuad Tabba