Date: Thu, 22 Feb 2018 12:07:17 -0500
From: Konrad Rzeszutek Wilk
To: Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, x86@kernel.org, Radim Krčmář, KarimAllah Ahmed, David Woodhouse, Jim Mattson, Thomas Gleixner, Ingo Molnar, stable@vger.kernel.org
Subject: Re: [PATCH 1/3] KVM: x86: use native MSR ops for SPEC_CTRL
Message-ID: <20180222170717.GP31483@char.us.oracle.com>
In-Reply-To: <1519249297-73718-2-git-send-email-pbonzini@redhat.com>

On Wed, Feb 21, 2018 at 10:41:35PM +0100, Paolo Bonzini wrote:
> Having a paravirt indirect call in the IBRS restore path is not a
> good idea, since we are trying to protect from speculative execution
> of bogus indirect branch targets. It is also slower, so use
> native_wrmsrl on the vmentry path too.

But it gets replaced during patching. As in, once the machine boots,
the assembly changes from:

	callq *0xfffflbah

to

	wrmsr

? I don't think you need this patch.
> 
> Fixes: d28b387fb74da95d69d2615732f50cceb38e9a4d
> Cc: x86@kernel.org
> Cc: Radim Krčmář
> Cc: KarimAllah Ahmed
> Cc: David Woodhouse
> Cc: Jim Mattson
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: stable@vger.kernel.org
> Signed-off-by: Paolo Bonzini
> ---
>  arch/x86/kvm/svm.c | 7 ++++---
>  arch/x86/kvm/vmx.c | 7 ++++---
>  2 files changed, 8 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index b3e488a74828..1598beeda11c 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -49,6 +49,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  
>  #include
> @@ -5355,7 +5356,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
>  	 * being speculatively taken.
>  	 */
>  	if (svm->spec_ctrl)
> -		wrmsrl(MSR_IA32_SPEC_CTRL, svm->spec_ctrl);
> +		native_wrmsrl(MSR_IA32_SPEC_CTRL, svm->spec_ctrl);
>  
>  	asm volatile (
>  		"push %%" _ASM_BP "; \n\t"
> @@ -5465,10 +5466,10 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
>  	 * save it.
>  	 */
>  	if (!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL))
> -		rdmsrl(MSR_IA32_SPEC_CTRL, svm->spec_ctrl);
> +		svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
>  
>  	if (svm->spec_ctrl)
> -		wrmsrl(MSR_IA32_SPEC_CTRL, 0);
> +		native_wrmsrl(MSR_IA32_SPEC_CTRL, 0);
>  
>  	/* Eliminate branch target predictions from guest mode */
>  	vmexit_fill_RSB();
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 67b028d8e726..5caeb8dc5bda 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -51,6 +51,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  
>  #include "trace.h"
> @@ -9453,7 +9454,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
>  	 * being speculatively taken.
>  	 */
>  	if (vmx->spec_ctrl)
> -		wrmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
> +		native_wrmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
>  
>  	vmx->__launched = vmx->loaded_vmcs->launched;
>  	asm(
> @@ -9589,10 +9590,10 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
>  	 * save it.
>  	 */
>  	if (!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL))
> -		rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
> +		vmx->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
>  
>  	if (vmx->spec_ctrl)
> -		wrmsrl(MSR_IA32_SPEC_CTRL, 0);
> +		native_wrmsrl(MSR_IA32_SPEC_CTRL, 0);
>  
>  	/* Eliminate branch target predictions from guest mode */
>  	vmexit_fill_RSB();
> -- 
> 1.8.3.1
> 
> 