Date: Tue, 17 Mar 2026 17:40:13 +0000
From: Jonathan Cameron
To: Fuad Tabba
CC: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon, KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64,
 open list
Subject: Re: [PATCH 03/10] KVM: arm64: Use guard(hyp_spinlock) in ffa.c
Message-ID: <20260317174013.00007622@huawei.com>
In-Reply-To: <20260316-tabba-el2_guard-v1-3-456875a2c6db@google.com>
References: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
 <20260316-tabba-el2_guard-v1-3-456875a2c6db@google.com>

On Mon, 16 Mar 2026 17:35:24 +0000
Fuad Tabba wrote:

> Migrate manual hyp_spin_lock() and hyp_spin_unlock() calls managing
> host_buffers.lock and version_lock to use the guard(hyp_spinlock)
> macro.
>
> This eliminates manual unlock calls on return paths and simplifies
> error handling by replacing goto labels with direct returns.
>
> Change-Id: I52e31c0bed3d2772c800a535af8abdabd81a178b
> Signed-off-by: Fuad Tabba

See below, and read the cleanup.h guidance notes on usage at the top of that
header. Specifically:

 * Lastly, given that the benefit of cleanup helpers is removal of
 * "goto", and that the "goto" statement can jump between scopes, the
 * expectation is that usage of "goto" and cleanup helpers is never
 * mixed in the same function. I.e. for a given routine, convert all
 * resources that need a "goto" cleanup to scope-based cleanup, or
 * convert none of them.

Doing otherwise might mean an encounter with a grumpy Linus :)

> ---
>  arch/arm64/kvm/hyp/nvhe/ffa.c | 86 +++++++++++++++++++------------------------
>  1 file changed, 38 insertions(+), 48 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/ffa.c b/arch/arm64/kvm/hyp/nvhe/ffa.c
> index 94161ea1cd60..0c772501c3ba 100644
> --- a/arch/arm64/kvm/hyp/nvhe/ffa.c
> +++ b/arch/arm64/kvm/hyp/nvhe/ffa.c
> @@ -239,6 +239,8 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
>  	int ret = 0;
>  	void *rx_virt, *tx_virt;
>  
> +	guard(hyp_spinlock)(&host_buffers.lock);
> +
>  	if (npages != (KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) / FFA_PAGE_SIZE) {
>  		ret = FFA_RET_INVALID_PARAMETERS;
>  		goto out;
> @@ -249,10 +251,9 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
>  		goto out;
>  	}
>  
> -	hyp_spin_lock(&host_buffers.lock);
>  	if (host_buffers.tx) {
>  		ret = FFA_RET_DENIED;
> -		goto out_unlock;
> +		goto out;

Whilst it isn't a bug as such, because you widened the guard's scope to avoid
one, there is some pretty strong guidance in cleanup.h about not using guard()
(or any of the other magic) in a function that also contains gotos. That came
from some views Linus expressed pretty clearly about the dangers it brings:
the problem is a goto that jumps across the point where a guard() is defined,
the risk of refactors that happen to end up with exactly that, and the general
view that the cleanup.h machinery exists to remove gotos, so if you still have
them, why bother?
You can usually end up with the best of all worlds via some refactors to pull
out helper functions, but that's a lot of churn.

Jonathan

>  	}
>  
>  	/*
> @@ -261,7 +262,7 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
>  	 */
>  	ret = ffa_map_hyp_buffers(npages);
>  	if (ret)
> -		goto out_unlock;
> +		goto out;
>  
>  	ret = __pkvm_host_share_hyp(hyp_phys_to_pfn(tx));
>  	if (ret) {
> @@ -292,8 +293,6 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
>  	host_buffers.tx = tx_virt;
>  	host_buffers.rx = rx_virt;
>  
> -out_unlock:
> -	hyp_spin_unlock(&host_buffers.lock);
>  out:
>  	ffa_to_smccc_res(res, ret);
>  	return;
> @@ -306,7 +305,7 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
>  	__pkvm_host_unshare_hyp(hyp_phys_to_pfn(tx));
>  err_unmap:
>  	ffa_unmap_hyp_buffers();
> -	goto out_unlock;
> +	goto out;
>  }
>