From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 28 Mar 2022 18:04:43 +0000
From: David Matlack
To: Ben Gardon
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Paolo Bonzini, Peter Xu, Sean Christopherson, Jim Mattson, David Dunn, Jing Zhang, Junaid Shahid
Subject: Re: [PATCH v2 6/9] KVM: x86/mmu: Factor out part of vmx_get_mt_mask which does not depend on vcpu
Message-ID:
References: <20220321224358.1305530-1-bgardon@google.com> <20220321224358.1305530-7-bgardon@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220321224358.1305530-7-bgardon@google.com>
Precedence: bulk
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Mar 21, 2022 at 03:43:55PM -0700, Ben Gardon wrote:
> Factor out the parts of vmx_get_mt_mask which do not depend on the vCPU
> argument. This also requires adding some error reporting to the helper
> function to say whether it was possible to generate the MT mask without
> a vCPU argument. This refactoring will allow the MT mask to be computed
> when noncoherent DMA is not enabled on a VM.
We could probably make vmx_get_mt_mask() entirely independent of the
kvm_vcpu, but it would take more work. For MTRRs, the guest must update
them on all CPUs at once (SDM 11.11.8), so we could just cache vCPU 0's
MTRRs at the VM level and use that here. (From my experience, Intel
CPUs implement MTRRs at the core level. Properly emulating that would
require a different EPT table for every virtual core.) For CR0.CD, I'm
not exactly sure what the semantics are for MP systems, but I can't
imagine it's valid for software to configure CR0.CD differently on
different cores. I would have to scour the SDM closely to confirm, but
we could probably do something like cache max(CR0.CD for all vCPUs) at
the VM level and use that to indicate if caching is disabled.

>
> No functional change intended.
>
>
> Signed-off-by: Ben Gardon
> ---
>  arch/x86/kvm/vmx/vmx.c | 24 +++++++++++++++++++-----
>  1 file changed, 19 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index e8963f5af618..69c654567475 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7149,9 +7149,26 @@ static int __init vmx_check_processor_compat(void)
> 	return 0;
>  }
>
> +static bool vmx_try_get_mt_mask(struct kvm *kvm, gfn_t gfn,
> +				bool is_mmio, u64 *mask)
> +{
> +	if (is_mmio) {
> +		*mask = MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT;
> +		return true;
> +	}
> +
> +	if (!kvm_arch_has_noncoherent_dma(kvm)) {
> +		*mask = (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
> +		return true;
> +	}
> +
> +	return false;
> +}
> +
>  static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
>  {
>  	u8 cache;
> +	u64 mask;
>
>  	/* We wanted to honor guest CD/MTRR/PAT, but doing so could result in
>  	 * memory aliases with conflicting memory types and sometimes MCEs.
> @@ -7171,11 +7188,8 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
>  	 * EPT memory type is used to emulate guest CD/MTRR.
>  	 */
>
> -	if (is_mmio)
> -		return MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT;
> -
> -	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm))
> -		return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
> +	if (vmx_try_get_mt_mask(vcpu->kvm, gfn, is_mmio, &mask))
> +		return mask;
>
>  	if (kvm_read_cr0(vcpu) & X86_CR0_CD) {
>  		if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
> --
> 2.35.1.894.gb6a874cedc-goog
>