Date: Wed, 22 Mar 2023 12:02:40 +0000
Message-ID: <87lejpgfj3.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Oliver Upton <oliver.upton@linux.dev>
Cc: James Morse <james.morse@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	kvmarm@lists.linux.dev,
	Zenghui Yu <yuzenghui@huawei.com>,
	linux-arm-kernel@lists.infradead.org,
	Sean Christopherson <seanjc@google.com>,
	Jeremy Linton <jeremy.linton@arm.com>
Subject: Re: [PATCH v2 2/4] KVM: arm64: Avoid lock inversion when setting the VM register width
In-Reply-To: <20230316211412.2651555-3-oliver.upton@linux.dev>
References: <20230316211412.2651555-1-oliver.upton@linux.dev>
	<20230316211412.2651555-3-oliver.upton@linux.dev>

On Thu, 16 Mar 2023 21:14:10 +0000,
Oliver Upton <oliver.upton@linux.dev> wrote:
> 
> kvm->lock must be taken outside of the vcpu->mutex. Of course, the
> locking documentation for KVM makes this abundantly clear. Nonetheless,
> the locking order in KVM/arm64 has been wrong for quite a while; we
> acquire the kvm->lock while holding the vcpu->mutex all over the shop.
> 
> All was seemingly fine until commit 42a90008f890 ("KVM: Ensure lockdep
> knows about kvm->lock vs. vcpu->mutex ordering rule") caught us with our
> pants down, leading to lockdep barfing:
> 
> ======================================================
> WARNING: possible circular locking dependency detected
> 6.2.0-rc7+ #19 Not tainted
> ------------------------------------------------------
> qemu-system-aar/859 is trying to acquire lock:
> ffff5aa69269eba0 (&host_kvm->lock){+.+.}-{3:3}, at: kvm_reset_vcpu+0x34/0x274
> 
> but task is already holding lock:
> ffff5aa68768c0b8 (&vcpu->mutex){+.+.}-{3:3}, at: kvm_vcpu_ioctl+0x8c/0xba0
> 
> which lock already depends on the new lock.
> 
> Add a dedicated lock to serialize writes to VM-scoped configuration from
> the context of a vCPU. Protect the register width flags with the new
> lock, thus avoiding the need to grab the kvm->lock while holding
> vcpu->mutex in kvm_reset_vcpu().
> 
> Reported-by: Jeremy Linton <jeremy.linton@arm.com>
> Link: https://lore.kernel.org/kvmarm/f6452cdd-65ff-34b8-bab0-5c06416da5f6@arm.com/
> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> ---
>  arch/arm64/include/asm/kvm_host.h |  3 +++
>  arch/arm64/kvm/arm.c              | 18 ++++++++++++++++++
>  arch/arm64/kvm/reset.c            |  6 +++---
>  3 files changed, 24 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 917586237a4d..1f4b9708a775 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -185,6 +185,9 @@ struct kvm_protected_vm {
>  };
>  
>  struct kvm_arch {
> +	/* Protects VM-scoped configuration data */
> +	struct mutex config_lock;
> +

nit: can we move this down into the structure and keep the MM stuff on
its own at the top? Placing it next to the flags would make some sense,
as these flags are definitely related to configuration matters.

>  	struct kvm_s2_mmu mmu;
>  
>  	/* VTCR_EL2 value for this VM */
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 731a78f85915..1478bec52767 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -128,6 +128,16 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>  {
>  	int ret;
>  
> +	mutex_init(&kvm->arch.config_lock);
> +
> +#ifdef CONFIG_LOCKDEP
> +	/* Clue in lockdep that the config_lock must be taken inside kvm->lock */
> +	mutex_lock(&kvm->lock);
> +	mutex_lock(&kvm->arch.config_lock);
> +	mutex_unlock(&kvm->arch.config_lock);
> +	mutex_unlock(&kvm->lock);
> +#endif
> +
>  	ret = kvm_share_hyp(kvm, kvm + 1);
>  	if (ret)
>  		return ret;
> @@ -328,6 +338,14 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>  
>  	spin_lock_init(&vcpu->arch.mp_state_lock);
>  
> +#ifdef CONFIG_LOCKDEP
> +	/* Inform lockdep that the config_lock is acquired after vcpu->mutex */
> +	mutex_lock(&vcpu->mutex);
> +	mutex_lock(&vcpu->kvm->arch.config_lock);
> +	mutex_unlock(&vcpu->kvm->arch.config_lock);
> +	mutex_unlock(&vcpu->mutex);
> +#endif

Shouldn't this hunk be moved to the previous patch?

> +
>  	/* Force users to call KVM_ARM_VCPU_INIT */
>  	vcpu->arch.target = -1;
>  	bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index b874ec6a37c1..ee445a7a1ef8 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -205,7 +205,7 @@ static int kvm_set_vm_width(struct kvm_vcpu *vcpu)
>  
>  	is32bit = vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT);
>  
> -	lockdep_assert_held(&kvm->lock);
> +	lockdep_assert_held(&kvm->arch.config_lock);
>  
>  	if (test_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED, &kvm->arch.flags)) {
>  		/*
> @@ -262,9 +262,9 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  	bool loaded;
>  	u32 pstate;
>  
> -	mutex_lock(&vcpu->kvm->lock);
> +	mutex_lock(&vcpu->kvm->arch.config_lock);
>  	ret = kvm_set_vm_width(vcpu);
> -	mutex_unlock(&vcpu->kvm->lock);
> +	mutex_unlock(&vcpu->kvm->arch.config_lock);
>  
>  	if (ret)
>  		return ret;

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.