From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vitaly Kuznetsov <vkuznets@redhat.com>
To: Tianyu Lan
Cc: pbonzini@redhat.com, rkrcmar@redhat.com, tglx@linutronix.de,
	mingo@redhat.com, hpa@zytor.com, x86@kernel.org,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org, KY Srinivasan
Subject:
Re: [RFC Patch 3/3] KVM/x86: Add tlb_remote_flush callback support for vmcs
References: <20180604090749.489-1-Tianyu.Lan@microsoft.com>
	<20180604090749.489-4-Tianyu.Lan@microsoft.com>
Date: Tue, 12 Jun 2018 17:12:07 +0200
In-Reply-To: <20180604090749.489-4-Tianyu.Lan@microsoft.com> (Tianyu Lan's
	message of "Mon, 4 Jun 2018 09:08:34 +0000")
Message-ID: <87h8m8qbso.fsf@vitty.brq.redhat.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/25.3 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain
List-ID: <linux-kernel.vger.kernel.org>

Tianyu Lan writes:

> Register the tlb_remote_flush callback for vmcs when the Hyper-V
> capability for nested guest mapping flush is detected. The interface
> helps to reduce the overhead of flushing the EPT tables across vCPUs
> in a nested VM. The traditional way is to send IPIs to all affected
> vCPUs and execute INVEPT on each of them, which triggers several VM
> exits for the IPI and INVEPT emulation. Hyper-V provides a hypercall
> to do the flush for all vCPUs at once.
>
> Signed-off-by: Lan Tianyu
> ---
>  arch/x86/kvm/vmx.c | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index e50beb76d846..6cb241c05690 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -4737,6 +4737,17 @@ static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid,
>  	}
>  }
>
> +static int vmx_remote_flush_tlb(struct kvm *kvm)
> +{
> +	struct kvm_vcpu *vcpu = kvm_get_vcpu(kvm, 0);
> +
> +	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
> +		return -1;

Why vcpu0? Can arch.mmu.root_hpa-s differ across vCPUs? What happens if
they do?

> +
> +	return hyperv_flush_guest_mapping(construct_eptp(vcpu,
> +				vcpu->arch.mmu.root_hpa));
> +}

The 'vmx_remote_flush_tlb' name looks generic enough, but it is actually
Hyper-V-specific. I'd suggest renaming it to something like
hv_remote_flush_tlb().

> +
>  static void vmx_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
>  {
>  	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, invalidate_gpa);
> @@ -7495,6 +7506,10 @@ static __init int hardware_setup(void)
>  	if (enable_ept && !cpu_has_vmx_ept_2m_page())
>  		kvm_disable_largepages();
>
> +	if (ms_hyperv.nested_features & HV_X64_NESTED_GUSET_MAPPING_FLUSH
> +	    && enable_ept)
> +		kvm_x86_ops->tlb_remote_flush = vmx_remote_flush_tlb;
> +
>  	if (!cpu_has_vmx_ple()) {
>  		ple_gap = 0;
>  		ple_window = 0;

--
Vitaly