From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 17 Mar 2026 17:47:36 +0000
From: Jonathan Cameron
To: Fuad Tabba
CC: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon,
 KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
 KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
 open list
Subject: Re: [PATCH 05/10] KVM: arm64: Use guard(hyp_spinlock) in pkvm.c
Message-ID: <20260317174736.00004b1a@huawei.com>
In-Reply-To: <20260316-tabba-el2_guard-v1-5-456875a2c6db@google.com>
References: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
 <20260316-tabba-el2_guard-v1-5-456875a2c6db@google.com>
X-Mailer: Claws Mail 4.3.0 (GTK 3.24.42; x86_64-w64-mingw32)
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
On Mon, 16 Mar 2026 17:35:26 +0000
Fuad Tabba wrote:

> Migrate manual hyp_spin_lock() and hyp_spin_unlock() calls managing
> the global vm_table_lock to use the guard(hyp_spinlock) macro.
>
> This significantly cleans up validation and error paths during VM
> creation and manipulation by eliminating the need for goto labels
> and manual unlock procedures on early returns.
>
> Change-Id: I894df69b3cfe053a77dd660dfb70c95640c6d70c
> Signed-off-by: Fuad Tabba
> ---
>
>  struct pkvm_hyp_vm *get_np_pkvm_hyp_vm(pkvm_handle_t handle)
> @@ -613,9 +605,8 @@ static int insert_vm_table_entry(pkvm_handle_t handle,
>  {
>  	int ret;
>
> -	hyp_spin_lock(&vm_table_lock);
> +	guard(hyp_spinlock)(&vm_table_lock);
>  	ret = __insert_vm_table_entry(handle, hyp_vm);
> -	hyp_spin_unlock(&vm_table_lock);
>
>  	return ret;

	return __insert_vm_table_entry();

> }
> @@ -815,35 +804,35 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
>  	if (!hyp_vcpu)
>  		return -ENOMEM;
>
> -	hyp_spin_lock(&vm_table_lock);
> +	scoped_guard(hyp_spinlock, &vm_table_lock) {
> +		hyp_vm = get_vm_by_handle(handle);
> +		if (!hyp_vm) {
> +			ret = -ENOENT;
> +			goto err_unmap;

As in earlier patch, I'd not mix gotos and guard()s.

> +		}
>
> -	hyp_vm = get_vm_by_handle(handle);
> -	if (!hyp_vm) {
> -		ret = -ENOENT;
> -		goto unlock;
> +		ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu);
> +		if (ret)
> +			goto err_unmap;
> +
> +		idx = hyp_vcpu->vcpu.vcpu_idx;
> +		if (idx >= hyp_vm->kvm.created_vcpus) {
> +			ret = -EINVAL;
> +			goto err_unmap;
> +		}
> +
> +		if (hyp_vm->vcpus[idx]) {
> +			ret = -EINVAL;
> +			goto err_unmap;
> +		}
> +
> +		hyp_vm->vcpus[idx] = hyp_vcpu;
> +
> +		return 0;
>  	}
>
> -	ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu);
> -	if (ret)
> -		goto unlock;
> -
> -	idx = hyp_vcpu->vcpu.vcpu_idx;
> -	if (idx >= hyp_vm->kvm.created_vcpus) {
> -		ret = -EINVAL;
> -		goto unlock;
> -	}
> -
> -	if (hyp_vm->vcpus[idx]) {
> -		ret = -EINVAL;
> -		goto unlock;
> -	}
> -
> -	hyp_vm->vcpus[idx] = hyp_vcpu;
> -unlock:
> -	hyp_spin_unlock(&vm_table_lock);
> -
> -	if (ret)
> -		unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu));
> +err_unmap:
> +	unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu));
>  	return ret;
>  }
>
>
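
To spell out what I mean for insert_vm_table_entry() (untested sketch only,
on top of this patch): with the guard in place the local ret can go away
entirely, since the lock is dropped automatically when the guard goes out
of scope:

static int insert_vm_table_entry(pkvm_handle_t handle,
				 struct pkvm_hyp_vm *hyp_vm)
{
	guard(hyp_spinlock)(&vm_table_lock);

	/* Lock released automatically on return. */
	return __insert_vm_table_entry(handle, hyp_vm);
}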
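
For __pkvm_init_vcpu(), rather than jumping out of the scoped_guard(), one
option is to pull the locked section into a helper so the early exits stay
plain returns and the caller keeps the unmap on failure. Untested sketch
only, and __pkvm_vcpu_insert_locked() is just a name I made up:

/* Hypothetical helper, not part of this patch. */
static int __pkvm_vcpu_insert_locked(pkvm_handle_t handle,
				     struct pkvm_hyp_vcpu *hyp_vcpu,
				     struct kvm_vcpu *host_vcpu)
{
	struct pkvm_hyp_vm *hyp_vm;
	unsigned int idx;
	int ret;

	/* Dropped automatically on every return path below. */
	guard(hyp_spinlock)(&vm_table_lock);

	hyp_vm = get_vm_by_handle(handle);
	if (!hyp_vm)
		return -ENOENT;

	ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu);
	if (ret)
		return ret;

	idx = hyp_vcpu->vcpu.vcpu_idx;
	if (idx >= hyp_vm->kvm.created_vcpus || hyp_vm->vcpus[idx])
		return -EINVAL;

	hyp_vm->vcpus[idx] = hyp_vcpu;

	return 0;
}

with __pkvm_init_vcpu() then ending in something like:

	ret = __pkvm_vcpu_insert_locked(handle, hyp_vcpu, host_vcpu);
	if (ret)
		unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu));

	return ret;

That keeps the unmap outside the lock, as in the existing code, without any
labels.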