From mboxrd@z Thu Jan 1 00:00:00 1970
From: KarimAllah Ahmed <karahmed@amazon.de>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Ashok Raj, Asit Mallick, Dave Hansen, Arjan Van De Ven, Tim Chen,
	Linus Torvalds, Andrea Arcangeli, Andi Kleen, Thomas Gleixner,
	Dan Williams, Jun Nakajima, Andy Lutomirski, Greg KH, Paolo Bonzini,
	Peter Zijlstra, David Woodhouse, KarimAllah Ahmed
Subject: [PATCH v2 3/4] x86/kvm: Add IBPB support
Date: Mon, 29 Jan 2018 01:58:51 +0100
Message-Id: <1517187532-32286-4-git-send-email-karahmed@amazon.de>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1517187532-32286-1-git-send-email-karahmed@amazon.de>
References: <1517187532-32286-1-git-send-email-karahmed@amazon.de>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Ashok Raj <ashok.raj@intel.com>

Add MSR passthrough for MSR_IA32_PRED_CMD and place branch predictor
barriers on switching between VMs to avoid inter-VM Spectre-v2 attacks.
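
The indirect_branch_prediction_barrier() helper used here comes from the
generic Spectre-v2 infrastructure and is not added by this patch;
conceptually it is a single write of the IBPB command bit to the
prediction-command MSR. A simplified sketch for illustration only (the
in-tree helper is built on the alternatives machinery):

	/*
	 * Simplified sketch, not part of this patch: flush indirect
	 * branch predictions on the current CPU via the IBPB command.
	 */
	static inline void indirect_branch_prediction_barrier(void)
	{
		if (boot_cpu_has(X86_FEATURE_IBPB))
			wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
	}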
[peterz: rebase and changelog rewrite]
[karahmed: - rebase
           - vmx: expose PRED_CMD whenever it is available
           - svm: only pass through IBPB if it is available]

Cc: Asit Mallick
Cc: Dave Hansen
Cc: Arjan Van De Ven
Cc: Tim Chen
Cc: Linus Torvalds
Cc: Andrea Arcangeli
Cc: Andi Kleen
Cc: Thomas Gleixner
Cc: Dan Williams
Cc: Jun Nakajima
Cc: Andy Lutomirski
Cc: Greg KH
Cc: Paolo Bonzini
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Peter Zijlstra (Intel)
Link: http://lkml.kernel.org/r/1515720739-43819-6-git-send-email-ashok.raj@intel.com
Signed-off-by: David Woodhouse
Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
---
 arch/x86/kvm/svm.c | 14 ++++++++++++++
 arch/x86/kvm/vmx.c |  4 ++++
 2 files changed, 18 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 2744b973..c886e46 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -529,6 +529,7 @@ struct svm_cpu_data {
 	struct kvm_ldttss_desc *tss_desc;
 
 	struct page *save_area;
+	struct vmcb *current_vmcb;
 };
 
 static DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);
@@ -918,6 +919,9 @@ static void svm_vcpu_init_msrpm(u32 *msrpm)
 
 		set_msr_interception(msrpm, direct_access_msrs[i].index, 1, 1);
 	}
+
+	if (boot_cpu_has(X86_FEATURE_IBPB))
+		set_msr_interception(msrpm, MSR_IA32_PRED_CMD, 1, 1);
 }
 
 static void add_msr_offset(u32 offset)
@@ -1706,11 +1710,17 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
 	kvm_vcpu_uninit(vcpu);
 	kmem_cache_free(kvm_vcpu_cache, svm);
+	/*
+	 * The vmcb page can be recycled, causing a false negative in
+	 * svm_vcpu_load(). So do a full IBPB now.
+	 */
+	indirect_branch_prediction_barrier();
 }
 
 static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
+	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
 	int i;
 
 	if (unlikely(cpu != vcpu->cpu)) {
@@ -1739,6 +1749,10 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	if (static_cpu_has(X86_FEATURE_RDTSCP))
 		wrmsrl(MSR_TSC_AUX, svm->tsc_aux);
 
+	if (sd->current_vmcb != svm->vmcb) {
+		sd->current_vmcb = svm->vmcb;
+		indirect_branch_prediction_barrier();
+	}
 	avic_vcpu_load(vcpu, cpu);
 }
 
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index dac564d..f82a44c 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -2296,6 +2296,7 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	if (per_cpu(current_vmcs, cpu) != vmx->loaded_vmcs->vmcs) {
 		per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;
 		vmcs_load(vmx->loaded_vmcs->vmcs);
+		indirect_branch_prediction_barrier();
 	}
 
 	if (!already_loaded) {
@@ -9613,6 +9614,9 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 		goto free_msrs;
 
 	msr_bitmap = vmx->vmcs01.msr_bitmap;
+
+	if (boot_cpu_has(X86_FEATURE_IBPB))
+		vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_PRED_CMD, MSR_TYPE_RW);
 	vmx_disable_intercept_for_msr(msr_bitmap, MSR_FS_BASE, MSR_TYPE_RW);
 	vmx_disable_intercept_for_msr(msr_bitmap, MSR_GS_BASE, MSR_TYPE_RW);
 	vmx_disable_intercept_for_msr(msr_bitmap, MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
-- 
2.7.4
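
With the PRED_CMD intercept cleared, the guest kernel's own Spectre-v2
mitigation can issue the barrier without taking a VM exit. Guest-side
sketch, for illustration only (assumes the guest enumerates IBPB via
CPUID and uses the standard MSR definitions):

	/*
	 * Guest-side sketch, not part of this patch: with the intercept
	 * disabled, this wrmsr executes natively instead of trapping to KVM.
	 */
	if (boot_cpu_has(X86_FEATURE_IBPB))
		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);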