From: Liran Alon
Subject: Re: [RFC 02/10] x86/kvm: Add IBPB support
Date: Sat, 20 Jan 2018 12:28:37 -0800 (PST)

----- karahmed@amazon.de wrote:

> From: Ashok Raj
>
> Add MSR passthrough for MSR_IA32_PRED_CMD and place branch predictor
> barriers on switching between VMs to avoid inter-VM spectre-v2
> attacks.
>
> [peterz: rebase and changelog rewrite]
> [dwmw2: fixes]
> [karahmed: - vmx: expose PRED_CMD whenever it is available
>            - svm: only pass through IBPB if it is available]
>
> Cc: Asit Mallick
> Cc: Dave Hansen
> Cc: Arjan Van De Ven
> Cc: Tim Chen
> Cc: Linus Torvalds
> Cc: Andrea Arcangeli
> Cc: Andi Kleen
> Cc: Thomas Gleixner
> Cc: Dan Williams
> Cc: Jun Nakajima
> Cc: Andy Lutomirski
> Cc: Greg KH
> Cc: David Woodhouse
> Cc: Paolo Bonzini
> Signed-off-by: Ashok Raj
> Signed-off-by: Peter Zijlstra (Intel)
> Link: http://lkml.kernel.org/r/1515720739-43819-6-git-send-email-ashok.raj@intel.com
>
> Signed-off-by: David Woodhouse
> Signed-off-by: KarimAllah Ahmed
> ---
>  arch/x86/kvm/svm.c | 14 ++++++++++++++
>  arch/x86/kvm/vmx.c |  4 ++++
>  2 files changed, 18 insertions(+)
>
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 2744b973..cfdb9ab 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -529,6 +529,7 @@ struct svm_cpu_data {
>         struct kvm_ldttss_desc *tss_desc;
>
>         struct page *save_area;
> +       struct vmcb *current_vmcb;
> };
>
> static DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);
> @@ -918,6 +919,9 @@ static void svm_vcpu_init_msrpm(u32 *msrpm)
>
>                 set_msr_interception(msrpm, direct_access_msrs[i].index, 1, 1);
>         }
> +
> +       if (boot_cpu_has(X86_FEATURE_AMD_PRED_CMD))
> +               set_msr_interception(msrpm, MSR_IA32_PRED_CMD, 1, 1);
> }
>
> static void add_msr_offset(u32 offset)
> @@ -1706,11 +1710,17 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
>         __free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
>         kvm_vcpu_uninit(vcpu);
>         kmem_cache_free(kvm_vcpu_cache, svm);
> +       /*
> +        * The vmcb page can be recycled, causing a false negative in
> +        * svm_vcpu_load(). So do a full IBPB now.
> +        */
> +       indirect_branch_prediction_barrier();
> }
>
> static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> {
>         struct vcpu_svm *svm = to_svm(vcpu);
> +       struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
>         int i;
>
>         if (unlikely(cpu != vcpu->cpu)) {
> @@ -1739,6 +1749,10 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>         if (static_cpu_has(X86_FEATURE_RDTSCP))
>                 wrmsrl(MSR_TSC_AUX, svm->tsc_aux);
>
> +       if (sd->current_vmcb != svm->vmcb) {
> +               sd->current_vmcb = svm->vmcb;
> +               indirect_branch_prediction_barrier();
> +       }
>         avic_vcpu_load(vcpu, cpu);
> }
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index d1e25db..3b64de2 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -2279,6 +2279,7 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>         if (per_cpu(current_vmcs, cpu) != vmx->loaded_vmcs->vmcs) {
>                 per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;
>                 vmcs_load(vmx->loaded_vmcs->vmcs);
> +               indirect_branch_prediction_barrier();
>         }
>
>         if (!already_loaded) {
> @@ -6791,6 +6792,9 @@ static __init int hardware_setup(void)
>                 kvm_tsc_scaling_ratio_frac_bits = 48;
>         }
>
> +       if (boot_cpu_has(X86_FEATURE_SPEC_CTRL))
> +               vmx_disable_intercept_for_msr(MSR_IA32_PRED_CMD, false);
> +
>         vmx_disable_intercept_for_msr(MSR_FS_BASE, false);
>         vmx_disable_intercept_for_msr(MSR_GS_BASE, false);
>         vmx_disable_intercept_for_msr(MSR_KERNEL_GS_BASE, true);
> --
> 2.7.4

Isn't it cleaner to check for "boot_cpu_has(X86_FEATURE_IBPB)" in both svm_vcpu_init_msrpm() and hardware_setup()?

-Liran
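P.S. To make the suggestion concrete, here is a minimal sketch. It assumes a single X86_FEATURE_IBPB feature bit that is set whenever IBPB is usable, on either AMD or Intel; this series does not define such a bit, so the name is illustrative only:

        /* arch/x86/kvm/svm.c, svm_vcpu_init_msrpm(): gate the PRED_CMD
         * passthrough on the one capability it actually needs.
         */
        if (boot_cpu_has(X86_FEATURE_IBPB))
                set_msr_interception(msrpm, MSR_IA32_PRED_CMD, 1, 1);

        /* arch/x86/kvm/vmx.c, hardware_setup(): the same check on the
         * VMX side, instead of the broader X86_FEATURE_SPEC_CTRL.
         */
        if (boot_cpu_has(X86_FEATURE_IBPB))
                vmx_disable_intercept_for_msr(MSR_IA32_PRED_CMD, false);

Both vendors would then key the PRED_CMD passthrough off the same capability, and the check would document exactly what the passthrough depends on.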