From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 20 Apr 2023 09:16:43 +0100
Message-ID: <86y1mnj7dg.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Reiji Watanabe <reijiw@google.com>
Cc: Oliver Upton <oliver.upton@linux.dev>, kvmarm@lists.linux.dev,
 kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 James Morse <james.morse@arm.com>,
 Alexandru Elisei <alexandru.elisei@arm.com>,
 Zenghui Yu <yuzenghui@huawei.com>,
 Suzuki K Poulose <suzuki.poulose@arm.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Ricardo Koller <ricarkol@google.com>,
 Jing Zhang <jingzhangos@google.com>,
 Raghavendra Rao Anata <rananta@google.com>,
 Will Deacon <will@kernel.org>
Subject: Re: [PATCH v1 1/2] KVM: arm64: Acquire mp_state_lock in kvm_arch_vcpu_ioctl_vcpu_init()
In-Reply-To: <20230420021302.iyl3pqo3lg6lpabv@google.com>
References: <20230419021852.2981107-1-reijiw@google.com>
 <20230419021852.2981107-2-reijiw@google.com>
 <87cz405or6.wl-maz@kernel.org>
 <20230420021302.iyl3pqo3lg6lpabv@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
On Thu, 20 Apr 2023 03:13:02 +0100,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> Hi Marc,
> 
> On Wed, Apr 19, 2023 at 08:12:45AM +0100, Marc Zyngier wrote:
> > On Wed, 19 Apr 2023 03:18:51 +0100,
> > Reiji Watanabe <reijiw@google.com> wrote:
> > > kvm_arch_vcpu_ioctl_vcpu_init() doesn't acquire mp_state_lock
> > > when setting the mp_state to KVM_MP_STATE_RUNNABLE. Fix the
> > > code to acquire the lock.
> > >
> > > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > > ---
> > >  arch/arm64/kvm/arm.c | 5 ++++-
> > >  1 file changed, 4 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > index fbafcbbcc463..388aa4f18f21 100644
> > > --- a/arch/arm64/kvm/arm.c
> > > +++ b/arch/arm64/kvm/arm.c
> > > @@ -1244,8 +1244,11 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
> > >  	 */
> > >  	if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
> > >  		kvm_arm_vcpu_power_off(vcpu);
> > > -	else
> > > +	else {
> > > +		spin_lock(&vcpu->arch.mp_state_lock);
> > >  		WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
> > > +		spin_unlock(&vcpu->arch.mp_state_lock);
> > > +	}
> > >
> > >  	return 0;
> > >  }
> >
> > I'm not entirely convinced that this fixes anything. What does the
> > lock hazard against, given that the write is atomic? But maybe a
> 
> It appears that kvm_psci_vcpu_on() expects the vCPU's mp_state
> to not be changed by holding the lock. Although I don't think this
> code practically causes any real issues now, I am a little concerned
> about leaving one instance that updates mp_state without acquiring the
> lock, in terms of future maintenance, as holding the lock won't prevent
> mp_state from being updated.
> 
> What do you think?

Right, fair enough. It is probably better to take the lock and not
have to think about this sort of thing... I'm becoming lazier by the
minute!

> > slightly more readable version of this would be to expand the
> > critical section this way:
> >
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 4ec888fdd4f7..bb21d0c25de7 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -1246,11 +1246,15 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
> >  	/*
> >  	 * Handle the "start in power-off" case.
> >  	 */
> > +	spin_lock(&vcpu->arch.mp_state_lock);
> > +
> >  	if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
> > -		kvm_arm_vcpu_power_off(vcpu);
> > +		__kvm_arm_vcpu_power_off(vcpu);
> >  	else
> >  		WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
> >
> > +	spin_unlock(&vcpu->arch.mp_state_lock);
> > +
> >  	return 0;
> >  }
> >
> > Thoughts?
> 
> Yes, it looks better!

Cool. I've applied this change to your patch, applied the series to
the lock inversion branch, and remerged the branch in -next.

We're getting there! ;-)

	M.

-- 
Without deviation from the norm, progress is not possible.
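The invariant the thread is circling around is the usual one for mixed
locked/lockless writers: as Reiji notes, kvm_psci_vcpu_on() takes
mp_state_lock so it can check-then-update mp_state against a stable value,
and that only holds if every writer of mp_state also takes the lock. A
minimal userspace sketch of that idea follows; the pthread locking, the
function names and the simplified state machine are illustrative
assumptions, not the actual KVM code:

	/*
	 * Illustrative sketch only: a pthread-based analogue of the
	 * mp_state_lock invariant discussed above. Names and the
	 * simplified state machine are made up for the example.
	 */
	#include <pthread.h>
	#include <stdio.h>

	enum mp_state { MP_STOPPED, MP_RUNNABLE };

	struct vcpu {
		pthread_mutex_t mp_state_lock;
		enum mp_state mp_state;
	};

	/* Check-then-act under the lock: safe only if *all* writers lock. */
	static void vcpu_power_on(struct vcpu *vcpu)
	{
		pthread_mutex_lock(&vcpu->mp_state_lock);
		if (vcpu->mp_state == MP_STOPPED)
			vcpu->mp_state = MP_RUNNABLE;
		pthread_mutex_unlock(&vcpu->mp_state_lock);
	}

	/* Lockless writer (shown only for contrast, not called below):
	 * can interleave with the locked check above and defeat it. */
	static void vcpu_init_unlocked(struct vcpu *vcpu)
	{
		vcpu->mp_state = MP_RUNNABLE;
	}

	/* Locked writer: what the patch converges on. */
	static void vcpu_init_locked(struct vcpu *vcpu)
	{
		pthread_mutex_lock(&vcpu->mp_state_lock);
		vcpu->mp_state = MP_RUNNABLE;
		pthread_mutex_unlock(&vcpu->mp_state_lock);
	}

	int main(void)
	{
		struct vcpu vcpu = {
			.mp_state_lock = PTHREAD_MUTEX_INITIALIZER,
			.mp_state = MP_STOPPED,
		};

		vcpu_init_locked(&vcpu);	/* every writer holds the lock */
		vcpu_power_on(&vcpu);
		printf("mp_state = %d\n", vcpu.mp_state);
		return 0;
	}

Widening the critical section to cover both branches, as in the second diff
above, has the further advantage that nobody has to reason about which
branch actually needs the lock.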