Date: Mon, 14 Mar 2022 20:22:07 +0000
From: Oliver Upton
To: Reiji Watanabe
Cc: Marc Zyngier, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones, Peng Liang,
	Peter Shier, Ricardo Koller, Jing Zhang, Raghavendra Rao Anata
Subject: Re: [PATCH v4 2/3] KVM: arm64: mixed-width check should be skipped for uninitialized vCPUs
References: <20220314061959.3349716-1-reijiw@google.com> <20220314061959.3349716-3-reijiw@google.com>
In-Reply-To: <20220314061959.3349716-3-reijiw@google.com>

On Sun, Mar 13, 2022 at 11:19:58PM -0700, Reiji Watanabe wrote:
> KVM allows userspace to configure either all EL1 32bit or 64bit vCPUs
> for a guest. At vCPU reset, vcpu_allowed_register_width() checks
> whether the vCPU's register width is consistent with all other vCPUs'.
> Since the check is done even against vCPUs that have not yet been
> initialized (KVM_ARM_VCPU_INIT has not been done), the uninitialized
> vCPUs are erroneously treated as 64bit vCPUs, which causes the function
> to incorrectly detect a mixed-width VM.
>
> Introduce KVM_ARCH_FLAG_EL1_32BIT and KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED
> bits for kvm->arch.flags. The EL1_32BIT bit indicates whether the guest
> is configured with all 32bit or all 64bit vCPUs, and the
> REG_WIDTH_CONFIGURED bit indicates whether the EL1_32BIT bit is valid
> (already set up). Those bits are set at the first KVM_ARM_VCPU_INIT for
> the guest, based on the vCPU's KVM_ARM_VCPU_EL1_32BIT configuration.
>
> Check the vCPU's register width against those new bits at the vCPU's
> KVM_ARM_VCPU_INIT (instead of against other vCPUs' register width).
>
> Fixes: 66e94d5cafd4 ("KVM: arm64: Prevent mixed-width VM creation")
> Signed-off-by: Reiji Watanabe

Hrmph... I hate to be asking this question so late in the game, but...

Are there any bits for which we really allow per-vCPU variation besides
KVM_ARM_VCPU_POWER_OFF? We unintentionally allow variance with the
KVM_ARM_VCPU_PSCI_0_2 bit even though that's complete nonsense. Stated
plainly, should we just deny any attempt at asymmetry besides POWER_OFF?

Besides the nits, I see nothing objectionable in the patch.
I'd really like to see more generalized constraints on vCPU
configuration, but if this is the route we take:

Reviewed-by: Oliver Upton

> ---
>  arch/arm64/include/asm/kvm_emulate.h | 27 ++++++++----
>  arch/arm64/include/asm/kvm_host.h    |  9 ++++
>  arch/arm64/kvm/reset.c               | 64 ++++++++++++++++++----------
>  3 files changed, 70 insertions(+), 30 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index d62405ce3e6d..7496deab025a 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -43,10 +43,22 @@ void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
>  
>  void kvm_vcpu_wfi(struct kvm_vcpu *vcpu);
>  
> +#if defined(__KVM_VHE_HYPERVISOR__) || defined(__KVM_NVHE_HYPERVISOR__)
>  static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
>  {
>  	return !(vcpu->arch.hcr_el2 & HCR_RW);
>  }
> +#else
> +static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm *kvm = vcpu->kvm;
> +
> +	WARN_ON_ONCE(!test_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED,
> +			       &kvm->arch.flags));
> +
> +	return test_bit(KVM_ARCH_FLAG_EL1_32BIT, &kvm->arch.flags);
> +}
> +#endif
>  
>  static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
>  {
> @@ -72,15 +84,14 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
>  		vcpu->arch.hcr_el2 |= HCR_TVM;
>  	}
>  
> -	if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
> +	if (vcpu_el1_is_32bit(vcpu))
>  		vcpu->arch.hcr_el2 &= ~HCR_RW;
> -
> -	/*
> -	 * TID3: trap feature register accesses that we virtualise.
> -	 * For now this is conditional, since no AArch32 feature regs
> -	 * are currently virtualised.
> -	 */
> -	if (!vcpu_el1_is_32bit(vcpu))
> +	else
> +		/*
> +		 * TID3: trap feature register accesses that we virtualise.
> +		 * For now this is conditional, since no AArch32 feature regs
> +		 * are currently virtualised.
> +		 */
>  		vcpu->arch.hcr_el2 |= HCR_TID3;
>  
>  	if (cpus_have_const_cap(ARM64_MISMATCHED_CACHE_TYPE) ||
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 11a7ae747ded..22ad977069f5 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -125,6 +125,15 @@ struct kvm_arch {
>  #define KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER	0
>  	/* Memory Tagging Extension enabled for the guest */
>  #define KVM_ARCH_FLAG_MTE_ENABLED	1
> +	/*
> +	 * The following two bits are used to indicate the guest's EL1
> +	 * register width configuration. A value of KVM_ARCH_FLAG_EL1_32BIT
> +	 * bit is valid only when KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED is set.
> +	 * Otherwise, the guest's EL1 register width has not yet been
> +	 * determined yet.
> +	 */
> +#define KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED	2
> +#define KVM_ARCH_FLAG_EL1_32BIT	3
>  	unsigned long flags;
>  
>  	/*
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index ecc40c8cd6f6..cbeb6216ee25 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -181,27 +181,45 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
>  	return 0;
>  }
>  
> -static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
> +/*
> + * A guest can have either all EL1 32bit or 64bit vcpus only. It is
> + * indicated by a value of KVM_ARCH_FLAG_EL1_32BIT bit in kvm->arch.flags,
> + * which is valid only when KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED in
> + * kvm->arch.flags is set.
> + * This function sets the EL1_32BIT bit based on the given @is32bit (and
> + * sets REG_WIDTH_CONFIGURED bit). When those flags are already set,
> + * @is32bit must be consistent with the flags.
> + * Returns 0 on success, or non-zero otherwise.
> + */

nit: use kerneldoc style: https://www.kernel.org/doc/html/latest/doc-guide/kernel-doc.html

> +static int kvm_set_vm_width(struct kvm *kvm, bool is32bit)
>  {
> -	struct kvm_vcpu *tmp;
> -	bool is32bit;
> -	unsigned long i;
> +	bool allowed;
> +
> +	lockdep_assert_held(&kvm->lock);
> +
> +	if (test_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED, &kvm->arch.flags)) {
> +		/*
> +		 * The guest's register width is already configured.
> +		 * Make sure that @is32bit is consistent with it.
> +		 */
> +		allowed = (is32bit ==
> +			   test_bit(KVM_ARCH_FLAG_EL1_32BIT, &kvm->arch.flags));
> +		return allowed ? 0 : -EINVAL;

nit: I'd avoid the ternary and just use a boring if/else (though I could
be in the minority here).

> +	}
>  
> -	is32bit = vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT);
>  	if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1) && is32bit)
> -		return false;
> +		return -EINVAL;
>  
>  	/* MTE is incompatible with AArch32 */
> -	if (kvm_has_mte(vcpu->kvm) && is32bit)
> -		return false;
> +	if (kvm_has_mte(kvm) && is32bit)
> +		return -EINVAL;
>  
> -	/* Check that the vcpus are either all 32bit or all 64bit */
> -	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
> -		if (vcpu_has_feature(tmp, KVM_ARM_VCPU_EL1_32BIT) != is32bit)
> -			return false;
> -	}
> +	if (is32bit)
> +		set_bit(KVM_ARCH_FLAG_EL1_32BIT, &kvm->arch.flags);
>  
> -	return true;
> +	set_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED, &kvm->arch.flags);
> +
> +	return 0;
>  }
>  
>  /**
> @@ -230,10 +248,17 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  	u32 pstate;
>  
>  	mutex_lock(&vcpu->kvm->lock);
> -	reset_state = vcpu->arch.reset_state;
> -	WRITE_ONCE(vcpu->arch.reset_state.reset, false);
> +	ret = kvm_set_vm_width(vcpu->kvm,
> +			       vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT));
> +	if (!ret) {
> +		reset_state = vcpu->arch.reset_state;
> +		WRITE_ONCE(vcpu->arch.reset_state.reset, false);
> +	}
>  	mutex_unlock(&vcpu->kvm->lock);
>  
> +	if (ret)
> +		return ret;
> +
>  	/* Reset PMU outside of the non-preemptible section */
>  	kvm_pmu_vcpu_reset(vcpu);
>  
> @@ -260,14 +285,9 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  		}
>  	}
>  
> -	if (!vcpu_allowed_register_width(vcpu)) {
> -		ret = -EINVAL;
> -		goto out;
> -	}
> -
>  	switch (vcpu->arch.target) {
>  	default:
> -		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
> +		if (vcpu_el1_is_32bit(vcpu)) {
>  			pstate = VCPU_RESET_PSTATE_SVC;
>  		} else {
>  			pstate = VCPU_RESET_PSTATE_EL1;
> --
> 2.35.1.723.g4982287a31-goog
>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel