Date: Tue, 17 Mar 2026 17:44:30 +0000
From: Jonathan Cameron
To: Fuad Tabba
CC: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Catalin Marinas, Will Deacon,
 KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64, open list
Subject: Re: [PATCH 04/10] KVM: arm64: Use guard(hyp_spinlock) in mm.c
Message-ID: <20260317174430.00001742@huawei.com>
In-Reply-To: <20260316-tabba-el2_guard-v1-4-456875a2c6db@google.com>
References: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
 <20260316-tabba-el2_guard-v1-4-456875a2c6db@google.com>

On Mon, 16 Mar 2026 17:35:25 +0000
Fuad Tabba wrote:

> Migrate manual hyp_spin_lock() and hyp_spin_unlock() calls managing
> pkvm_pgd_lock to use the guard(hyp_spinlock) macro.
> 
> This eliminates manual unlock calls on return paths and simplifies
> error handling by replacing goto labels with direct returns.
> Note: hyp_fixblock_lock spans across hyp_fixblock_map/unmap functions,
> so it retains explicit lock/unlock semantics to avoid RAII violations.
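
For anyone reading along: I assume an earlier patch in this series
defines the guard via the generic scope-based cleanup helpers from
<linux/cleanup.h>, wrapping the existing nVHE spinlock primitives
roughly like this (my sketch, not the actual patch):

	/*
	 * Hypothetical definition: guard(hyp_spinlock)(&lock) takes the
	 * lock immediately and releases it automatically when the
	 * enclosing scope ends, which is what removes the manual unlock
	 * on each return path below.
	 */
	DEFINE_LOCK_GUARD_1(hyp_spinlock, hyp_spinlock_t,
			    hyp_spin_lock(_T->lock),
			    hyp_spin_unlock(_T->lock))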
> 
> Change-Id: I6bb3f4105e95480269e5bf8289d084c8f9981730
> Signed-off-by: Fuad Tabba
> ---
>  arch/arm64/kvm/hyp/nvhe/mm.c | 37 ++++++++++---------------------------
>  1 file changed, 10 insertions(+), 27 deletions(-)
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
> index 218976287d3f..7a15c9fc15e5 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mm.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mm.c
> @@ -35,13 +35,8 @@ static DEFINE_PER_CPU(struct hyp_fixmap_slot, fixmap_slots);
> 
>  static int __pkvm_alloc_private_va_range(unsigned long start, size_t size)
> @@ -80,10 +75,9 @@ int pkvm_alloc_private_va_range(size_t size, unsigned long *haddr)
>  	unsigned long addr;
>  	int ret;
> 
> -	hyp_spin_lock(&pkvm_pgd_lock);
> +	guard(hyp_spinlock)(&pkvm_pgd_lock);
>  	addr = __io_map_base;
>  	ret = __pkvm_alloc_private_va_range(addr, size);
> -	hyp_spin_unlock(&pkvm_pgd_lock);
> 
>  	*haddr = addr;

Maybe it loses a little meaning, but given that this sets *haddr whether
or not the call fails, you could reorder and save a few lines:

	addr = __io_map_base;
	*haddr = addr;
	return __pkvm_alloc_private_va_range(addr, size);
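
With the guard in place, the whole function would then collapse to
something like this (untested sketch on my side; assumes ret has no
remaining users in the function):

	int pkvm_alloc_private_va_range(size_t size, unsigned long *haddr)
	{
		unsigned long addr;

		/* Dropped automatically when the function returns. */
		guard(hyp_spinlock)(&pkvm_pgd_lock);

		addr = __io_map_base;
		/* Reported to the caller whether or not allocation succeeds. */
		*haddr = addr;
		return __pkvm_alloc_private_va_range(addr, size);
	}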