From: Jonathan Cameron <jonathan.cameron@huawei.com>
To: Fuad Tabba <tabba@google.com>
Cc: Marc Zyngier <maz@kernel.org>, Oliver Upton <oupton@kernel.org>,
	"Joey Gouly" <joey.gouly@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Zenghui Yu <yuzenghui@huawei.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	"KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64"
	<linux-arm-kernel@lists.infradead.org>,
	KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64
	<kvmarm@lists.linux.dev>,
	"open list" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 06/10] KVM: arm64: Use guard(mutex) in mmu.c
Date: Tue, 17 Mar 2026 17:50:47 +0000
Message-ID: <20260317175047.00000b6e@huawei.com>
In-Reply-To: <20260316-tabba-el2_guard-v1-6-456875a2c6db@google.com>

On Mon, 16 Mar 2026 17:35:27 +0000
Fuad Tabba <tabba@google.com> wrote:

> Migrate manual mutex_lock() and mutex_unlock() calls managing
> kvm_hyp_pgd_mutex and hyp_shared_pfns_lock to use the
> guard(mutex) macro.
> 
> This eliminates manual unlock calls on return paths and simplifies
> error handling by replacing unlock goto labels with direct returns.
> Centralized cleanup goto paths are preserved with manual unlocks
> removed.
> 
> Change-Id: Ib0f33a474eb84f19da4de0858c77751bbe55dfbb
> Signed-off-by: Fuad Tabba <tabba@google.com>

> @@ -652,22 +632,20 @@ int hyp_alloc_private_va_range(size_t size, unsigned long *haddr)
>  	unsigned long base;
>  	int ret = 0;
>  
> -	mutex_lock(&kvm_hyp_pgd_mutex);
> -
> -	/*
> -	 * This assumes that we have enough space below the idmap
> -	 * page to allocate our VAs. If not, the check in
> -	 * __hyp_alloc_private_va_range() will kick. A potential
> -	 * alternative would be to detect that overflow and switch
> -	 * to an allocation above the idmap.
> -	 *
> -	 * The allocated size is always a multiple of PAGE_SIZE.
> -	 */
> -	size = PAGE_ALIGN(size);
> -	base = io_map_base - size;
> -	ret = __hyp_alloc_private_va_range(base);
> -
> -	mutex_unlock(&kvm_hyp_pgd_mutex);
> +	scoped_guard(mutex, &kvm_hyp_pgd_mutex) {
> +		/*
> +		 * This assumes that we have enough space below the idmap
> +		 * page to allocate our VAs. If not, the check in
> +		 * __hyp_alloc_private_va_range() will kick. A potential
> +		 * alternative would be to detect that overflow and switch
> +		 * to an allocation above the idmap.
> +		 *
> +		 * The allocated size is always a multiple of PAGE_SIZE.
> +		 */
> +		size = PAGE_ALIGN(size);
> +		base = io_map_base - size;
> +		ret = __hyp_alloc_private_va_range(base);
Minor one, and a matter of taste, but I'd do
		if (ret)
			return ret;
	}
	*haddr = base;

	return 0;

> +	}
>  
>  	if (!ret)
>  		*haddr = base;
> @@ -711,17 +689,16 @@ int create_hyp_stack(phys_addr_t phys_addr, unsigned long *haddr)
>  	size_t size;
>  	int ret;
>  
> -	mutex_lock(&kvm_hyp_pgd_mutex);
> -	/*
> -	 * Efficient stack verification using the NVHE_STACK_SHIFT bit implies
> -	 * an alignment of our allocation on the order of the size.
> -	 */
> -	size = NVHE_STACK_SIZE * 2;
> -	base = ALIGN_DOWN(io_map_base - size, size);
> +	scoped_guard(mutex, &kvm_hyp_pgd_mutex) {
> +		/*
> +		 * Efficient stack verification using the NVHE_STACK_SHIFT bit implies
> +		 * an alignment of our allocation on the order of the size.
> +		 */
> +		size = NVHE_STACK_SIZE * 2;
> +		base = ALIGN_DOWN(io_map_base - size, size);
>  
> -	ret = __hyp_alloc_private_va_range(base);
> -
> -	mutex_unlock(&kvm_hyp_pgd_mutex);
> +		ret = __hyp_alloc_private_va_range(base);
> +	}
>  
>  	if (ret) {
>  		kvm_err("Cannot allocate hyp stack guard page\n");
Maybe move this under the guard, just to keep the error check nearer to the
code in question.
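i.e. something along these lines (a sketch assembled from the diff context
above; the rest of create_hyp_stack() isn't quoted here, so treat this as
illustrative only):

```c
	scoped_guard(mutex, &kvm_hyp_pgd_mutex) {
		/*
		 * Efficient stack verification using the NVHE_STACK_SHIFT bit
		 * implies an alignment of our allocation on the order of the
		 * size.
		 */
		size = NVHE_STACK_SIZE * 2;
		base = ALIGN_DOWN(io_map_base - size, size);

		ret = __hyp_alloc_private_va_range(base);
		if (ret) {
			kvm_err("Cannot allocate hyp stack guard page\n");
			return ret;
		}
	}
```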

Thanks,

Jonathan

> 


