From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from mail.linuxfoundation.org ([140.211.169.12]:57508 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1730771AbeHNUYt (ORCPT );
	Tue, 14 Aug 2018 16:24:49 -0400
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org,
	Konrad Rzeszutek Wilk, Thomas Gleixner
Subject: [PATCH 4.14 065/104] x86/KVM/VMX: Use MSR save list for IA32_FLUSH_CMD if required
Date: Tue, 14 Aug 2018 19:17:19 +0200
Message-Id: <20180814171519.529392899@linuxfoundation.org>
In-Reply-To: <20180814171515.270692185@linuxfoundation.org>
References: <20180814171515.270692185@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: stable-owner@vger.kernel.org
List-ID: 

4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Konrad Rzeszutek Wilk

commit 390d975e0c4e60ce70d4157e0dd91ede37824603 upstream

If the L1D flush module parameter is set to 'always' and the
IA32_FLUSH_CMD MSR is available, optimize the VMENTER code with the
MSR save list.

Signed-off-by: Konrad Rzeszutek Wilk
Signed-off-by: Thomas Gleixner
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/kvm/vmx.c |   42 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 37 insertions(+), 5 deletions(-)

--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -5714,6 +5714,16 @@ static void ept_set_mmio_spte_mask(void)
 						   VMX_EPT_MISCONFIG_WX_VALUE);
 }
 
+static bool vmx_l1d_use_msr_save_list(void)
+{
+	if (!enable_ept || !boot_cpu_has_bug(X86_BUG_L1TF) ||
+	    static_cpu_has(X86_FEATURE_HYPERVISOR) ||
+	    !static_cpu_has(X86_FEATURE_FLUSH_L1D))
+		return false;
+
+	return vmentry_l1d_flush == VMENTER_L1D_FLUSH_ALWAYS;
+}
+
 #define VMX_XSS_EXIT_BITMAP 0
 /*
  * Sets up the vmcs for emulated real mode.
@@ -6061,6 +6071,12 @@ static void vmx_set_nmi_mask(struct kvm_
 			vmcs_clear_bits(GUEST_INTERRUPTIBILITY_INFO,
 					GUEST_INTR_STATE_NMI);
 	}
+	/*
+	 * If flushing the L1D cache on every VMENTER is enforced and the
+	 * MSR is available, use the MSR save list.
+	 */
+	if (vmx_l1d_use_msr_save_list())
+		add_atomic_switch_msr(vmx, MSR_IA32_FLUSH_CMD, L1D_FLUSH, 0, true);
 }
 
 static int vmx_nmi_allowed(struct kvm_vcpu *vcpu)
@@ -9082,11 +9098,26 @@ static void vmx_l1d_flush(struct kvm_vcp
 	bool always;
 
 	/*
-	 * If the mitigation mode is 'flush always', keep the flush bit
-	 * set, otherwise clear it. It gets set again either from
-	 * vcpu_run() or from one of the unsafe VMEXIT handlers.
+	 * This code is only executed when:
+	 * - the flush mode is 'cond'
+	 * - the flush mode is 'always' and the flush MSR is not
+	 *   available
+	 *
+	 * If the CPU has the flush MSR then clear the flush bit because
+	 * 'always' mode is handled via the MSR save list.
+	 *
+	 * If the MSR is not available then act depending on the mitigation
+	 * mode: If 'flush always', keep the flush bit set, otherwise clear
+	 * it.
+	 *
+	 * The flush bit gets set again either from vcpu_run() or from one
+	 * of the unsafe VMEXIT handlers.
 	 */
-	always = vmentry_l1d_flush == VMENTER_L1D_FLUSH_ALWAYS;
+	if (static_cpu_has(X86_FEATURE_FLUSH_L1D))
+		always = false;
+	else
+		always = vmentry_l1d_flush == VMENTER_L1D_FLUSH_ALWAYS;
+
 	vcpu->arch.l1tf_flush_l1d = always;
 
 	vcpu->stat.l1d_flush++;
@@ -12503,7 +12534,8 @@ static int __init vmx_setup_l1d_flush(vo
 	struct page *page;
 
 	if (vmentry_l1d_flush == VMENTER_L1D_FLUSH_NEVER ||
-	    !boot_cpu_has_bug(X86_BUG_L1TF) ||
+	    !boot_cpu_has_bug(X86_BUG_L1TF) ||
+	    vmx_l1d_use_msr_save_list())
 		return 0;
 
 	if (!boot_cpu_has(X86_FEATURE_FLUSH_L1D)) {
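For anyone reviewing or testing this backport: kernels carrying the L1TF
series report the mitigation state through sysfs, so the flush mode in
effect can be checked from userspace without instrumenting KVM. Below is
a minimal sketch, not part of the patch, that prints the relevant nodes.
It assumes a kernel with this series applied and the kvm_intel module
loaded; the module parameter node is absent otherwise.

	/*
	 * Illustrative only -- not part of the patch. Prints the L1TF
	 * mitigation state exposed by kernels carrying this series.
	 * Assumes kvm_intel is loaded so the module parameter exists.
	 */
	#include <stdio.h>
	#include <string.h>

	static void show(const char *path)
	{
		char buf[256];
		FILE *f = fopen(path, "r");

		if (!f) {
			printf("%s: not available\n", path);
			return;
		}
		if (fgets(buf, sizeof(buf), f)) {
			buf[strcspn(buf, "\n")] = '\0';
			printf("%s: %s\n", path, buf);
		}
		fclose(f);
	}

	int main(void)
	{
		/* Overall L1TF state, including the VMX flush mode */
		show("/sys/devices/system/cpu/vulnerabilities/l1tf");
		/* kvm-intel flush mode: 'always', 'cond' or 'never' */
		show("/sys/module/kvm_intel/parameters/vmentry_l1d_flush");
		return 0;
	}

With vmentry_l1d_flush set to 'always' on a CPU that advertises the
flush MSR, this patch moves the flush out of vmx_l1d_flush() and into
the VMENTER MSR save list, so the software flush path is skipped
entirely; the sysfs output above still reports the configured mode.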