Date: Fri, 27 Jan 2023 22:11:13 +0000
From: Sean Christopherson
To: Yang Weijiang
Cc: pbonzini@redhat.com, jmattson@google.com, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, like.xu.linux@gmail.com,
	kan.liang@linux.intel.com, wei.w.wang@intel.com
Subject: Re: [PATCH v2 13/15] KVM: x86/vmx: Save/Restore guest Arch LBR Ctrl msr at SMM entry/exit
References: <20221125040604.5051-1-weijiang.yang@intel.com>
	<20221125040604.5051-14-weijiang.yang@intel.com>
In-Reply-To: <20221125040604.5051-14-weijiang.yang@intel.com>

On Thu, Nov 24, 2022, Yang Weijiang wrote:
> Per SDM 3B Chapter 18: "IA32_LBR_CTL.LBREn is saved and cleared on #SMI,
> and restored on RSM", store guest IA32_LBR_CTL in SMRAM and clear LBREn
> in the VMCS at SMM entry, and do the reverse at SMM exit.
>
> Suggested-by: Paolo Bonzini
> Signed-off-by: Yang Weijiang
> ---
>  arch/x86/kvm/smm.c     |  1 +
>  arch/x86/kvm/smm.h     |  3 ++-
>  arch/x86/kvm/vmx/vmx.c | 22 ++++++++++++++++++++++
>  3 files changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
> index a9c1c2af8d94..5987090b440f 100644
> --- a/arch/x86/kvm/smm.c
> +++ b/arch/x86/kvm/smm.c
> @@ -86,6 +86,7 @@ static void check_smram_offsets(void)
>  	CHECK_SMRAM64_OFFSET(smm_revison,		0xFEFC);
>  	CHECK_SMRAM64_OFFSET(smbase,			0xFF00);
>  	CHECK_SMRAM64_OFFSET(reserved4,			0xFF04);
> +	CHECK_SMRAM64_OFFSET(arch_lbr_ctl,		0xFF10);
>  	CHECK_SMRAM64_OFFSET(ssp,			0xFF18);
>  	CHECK_SMRAM64_OFFSET(svm_guest_pat,		0xFF20);
>  	CHECK_SMRAM64_OFFSET(svm_host_efer,		0xFF28);
> diff --git a/arch/x86/kvm/smm.h b/arch/x86/kvm/smm.h
> index a1cf2ac5bd78..5a6479205d91 100644
> --- a/arch/x86/kvm/smm.h
> +++ b/arch/x86/kvm/smm.h
> @@ -114,7 +114,8 @@ struct kvm_smram_state_64 {
>  	u32 reserved3[3];
>  	u32 smm_revison;
>  	u32 smbase;
> -	u32 reserved4[5];
> +	u32 reserved4[3];
> +	u64 arch_lbr_ctl;
>
>  	/* ssp and svm_* fields below are not implemented by KVM */
>  	u64 ssp;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 6ad765ea4059..cc782233c075 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -8006,11 +8006,21 @@ static int vmx_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
>  	vmx->nested.smm.vmxon = vmx->nested.vmxon;
>  	vmx->nested.vmxon = false;
>  	vmx_clear_hlt(vcpu);
> +
> +	if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR) &&
> +	    guest_cpuid_has(vcpu, X86_FEATURE_LM)) {

Uh, so this arbitrary dependency on 64-bit vCPUs needs to be factored into
the enabling.  And KVM should WARN if arch LBRs get enabled for a 32-bit
vCPU.
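Something along these lines, maybe (completely untested, and hanging the
check off vmx_vcpu_after_set_cpuid() is just one possible spot):

	/*
	 * Possible placement: when the vCPU model is (re)computed, e.g. in
	 * vmx_vcpu_after_set_cpuid(), disallow arch LBRs on 32-bit vCPUs.
	 */
	if (!guest_cpuid_has(vcpu, X86_FEATURE_LM))
		guest_cpuid_clear(vcpu, X86_FEATURE_ARCH_LBR);

so that the SMM entry/exit paths can key off the vCPU's own arch LBR
support and WARN on the 64-bit dependency instead of open coding it:

	if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR) &&
	    guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR)) {
		/* Enabling should have rejected arch LBRs for 32-bit vCPUs. */
		WARN_ON_ONCE(!guest_cpuid_has(vcpu, X86_FEATURE_LM));
		...
	}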
> +		u64 ctl = vmcs_read64(GUEST_IA32_LBR_CTL);
> +
> +		smram->smram64.arch_lbr_ctl = ctl;
> +		vmcs_write64(GUEST_IA32_LBR_CTL, ctl & ~ARCH_LBR_CTL_LBREN);
> +	}
> +
>  	return 0;
>  }
>
>  static int vmx_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
>  {
> +	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
>  	int ret;
>
> @@ -8027,6 +8037,18 @@ static int vmx_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
>  		vmx->nested.nested_run_pending = 1;
>  		vmx->nested.smm.guest_mode = false;
>  	}
> +
> +	if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR) &&
> +	    guest_cpuid_has(vcpu, X86_FEATURE_LM)) {
> +		u64 ctl = smram->smram64.arch_lbr_ctl;
> +
> +		vmcs_write64(GUEST_IA32_LBR_CTL, ctl & ARCH_LBR_CTL_LBREN);

IIUC, this should set only the LBREn bit and preserve all other bits of the
MSR, not clobber the entire MSR with the masked value (untested sketch at
the bottom of this mail).

> +
> +		if (intel_pmu_lbr_is_enabled(vcpu) &&
> +		    (ctl & ARCH_LBR_CTL_LBREN) && !lbr_desc->event)
> +			intel_pmu_create_guest_lbr_event(vcpu);
> +	}
> +
>  	return 0;
>  }
>
> --
> 2.27.0
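The sketch mentioned above (again, untested): re-set only LBREn on RSM and
leave the rest of the VMCS field intact, since vmx_enter_smm() cleared only
LBREn and the other bits are still live in the VMCS:

	/* Restore LBREn on RSM; the other GUEST_IA32_LBR_CTL bits were untouched. */
	if (smram->smram64.arch_lbr_ctl & ARCH_LBR_CTL_LBREN)
		vmcs_write64(GUEST_IA32_LBR_CTL,
			     vmcs_read64(GUEST_IA32_LBR_CTL) |
			     ARCH_LBR_CTL_LBREN);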