Date: Mon, 10 Jan 2022 10:48:41 +0000
From: Alexandru Elisei
To: Reiji Watanabe
Cc: Marc Zyngier, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, James Morse, Suzuki K Poulose,
 Paolo Bonzini, Will Deacon, Peter Shier, Ricardo Koller, Oliver Upton,
 Jing Zhang, Raghavendra Rao Anata
Subject: Re: [PATCH 1/2] KVM: arm64: mixed-width check should be skipped for uninitialized vCPUs
In-Reply-To: <20220110054042.1079932-1-reijiw@google.com>

Hi Reiji,

On Sun, Jan 09, 2022 at 09:40:41PM -0800, Reiji Watanabe wrote:
> vcpu_allowed_register_width() checks if all the VCPUs are either
> all 32bit or all 64bit. Since the checking is done even for vCPUs
> that are not initialized (KVM_ARM_VCPU_INIT has not been done) yet,
> the non-initialized vCPUs are erroneously treated as 64bit vCPU,
> which causes the function to incorrectly detect a mixed-width VM.
>
> Fix vcpu_allowed_register_width() to skip the check for vCPUs that
> are not initialized yet.
>
> Fixes: 66e94d5cafd4 ("KVM: arm64: Prevent mixed-width VM creation")
> Signed-off-by: Reiji Watanabe
> ---
>  arch/arm64/kvm/reset.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
>
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index 426bd7fbc3fd..ef78bbc7566a 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -180,8 +180,19 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
>  	if (kvm_has_mte(vcpu->kvm) && is32bit)
>  		return false;
>
> +	/*
> +	 * Make sure vcpu->arch.target setting is visible from others so
> +	 * that the width consistency checking between two vCPUs is done
> +	 * by at least one of them at KVM_ARM_VCPU_INIT.
> +	 */
> +	smp_mb();

From ARM DDI 0487G.a, page B2-146 ("Data Memory Barrier (DMB)"):

"The DMB instruction is a memory barrier instruction that ensures the
relative order of memory accesses before the barrier with memory accesses
after the barrier."

I'm going to assume from the comment that you are referring to completion
of memory accesses ("Make sure [..] is visible from others"). Please
correct me if I am wrong. In that case, DMB ensures *ordering* of memory
accesses with regard to writes and reads, not *completion*. Have a look at
tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus
for the classic message-passing example of memory ordering. Message
passing and other patterns are also explained in ARM DDI 0487G.a, page
K11-8363.

I'm not saying that your approach is incorrect, but the commit message
should explain which memory accesses are being ordered relative to each
other and why.
Thanks,
Alex

> +
>  	/* Check that the vcpus are either all 32bit or all 64bit */
>  	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
> +		/* Skip if KVM_ARM_VCPU_INIT is not done for the vcpu yet */
> +		if (tmp->arch.target == -1)
> +			continue;
> +
>  		if (vcpu_has_feature(tmp, KVM_ARM_VCPU_EL1_32BIT) != is32bit)
>  			return false;
>  	}
>
> base-commit: df0cc57e057f18e44dac8e6c18aba47ab53202f9
> --
> 2.34.1.575.g55b058a8bb-goog