From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tianyu Lan
Subject: [PATCH V2 4/5] KVM/x86: Add tlb_remote_flush callback support for vmx
Date: Mon, 9 Jul 2018 09:02:54 +0000
Message-ID: <20180709090218.15342-5-Tianyu.Lan@microsoft.com>
References: <20180709090218.15342-1-Tianyu.Lan@microsoft.com>
In-Reply-To: <20180709090218.15342-1-Tianyu.Lan@microsoft.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Content-Language: en-US
Cc: Stephen Hemminger, "kvm@vger.kernel.org", "pbonzini@redhat.com",
 "rkrcmar@redhat.com", Haiyang Zhang, "x86@kernel.org",
 "linux-kernel@vger.kernel.org", "devel@linuxdriverproject.org",
 "Michael Kelley (EOSG)", "mingo@redhat.com", "hpa@zytor.com",
 Tianyu Lan, "tglx@linutronix.de", "vkuznets@redhat.com"
Errors-To: driverdev-devel-bounces@linuxdriverproject.org
Sender: "devel"
List-Id: kvm.vger.kernel.org

Register the tlb_remote_flush callback for vmx when the Hyper-V capability
for nested guest mapping flush is detected. This interface helps reduce
the overhead of flushing the EPT table across the vcpus of a nested VM.
The traditional way is to send IPIs to all affected vcpus and execute
INVEPT on each vcpu, which triggers several vmexits for the IPI and INVEPT
emulation. Hyper-V provides a hypercall that performs the flush for all
vcpus in one operation.

Signed-off-by: Lan Tianyu
---
Change since v1:
       Use ept_pointers_match to check the condition of identical ept
       table pointers and get the ept pointer from struct
       vcpu_vmx->ept_pointer.
---
 arch/x86/kvm/vmx.c | 27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 8142b2da430a..55fe14d1d4d4 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4778,6 +4778,25 @@ static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid,
 	}
 }
 
+static int hv_remote_flush_tlb(struct kvm *kvm)
+{
+	int ret;
+
+	spin_lock(&to_kvm_vmx(kvm)->ept_pointer_lock);
+
+	if (!to_kvm_vmx(kvm)->ept_pointers_match) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	ret = hyperv_flush_guest_mapping(
+			to_vmx(kvm_get_vcpu(kvm, 0))->ept_pointer);
+
+out:
+	spin_unlock(&to_kvm_vmx(kvm)->ept_pointer_lock);
+	return ret;
+}
+
 static void vmx_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
 {
 	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, invalidate_gpa);
@@ -4968,7 +4987,7 @@ static void check_ept_pointer(struct kvm_vcpu *vcpu, u64 eptp)
 	u64 tmp_eptp = INVALID_PAGE;
 	int i;
 
-	if (!kvm_x86_ops->tlb_remote_flush)
+	if (!kvm_x86_ops->hv_tlb_remote_flush)
 		return;
 
 	spin_lock(&to_kvm_vmx(kvm)->ept_pointer_lock);
@@ -7570,6 +7589,12 @@ static __init int hardware_setup(void)
 	if (enable_ept && !cpu_has_vmx_ept_2m_page())
 		kvm_disable_largepages();
 
+#if IS_ENABLED(CONFIG_HYPERV)
+	if (ms_hyperv.nested_features & HV_X64_NESTED_GUEST_MAPPING_FLUSH
+	    && enable_ept)
+		kvm_x86_ops->hv_tlb_remote_flush = hv_remote_flush_tlb;
+#endif
+
 	if (!cpu_has_vmx_ple()) {
 		ple_gap = 0;
 		ple_window = 0;
-- 
2.14.3