From: Xiao Guangrong
Subject: Re: [PATCH RFC 1/3] vmx: allow ioeventfd for EPT violations
Date: Mon, 31 Aug 2015 10:53:58 +0800
Message-ID: <55E3C1C6.7030603@linux.intel.com>
References: <1440925898-23440-1-git-send-email-mst@redhat.com> <1440925898-23440-2-git-send-email-mst@redhat.com>
In-Reply-To: <1440925898-23440-2-git-send-email-mst@redhat.com>
To: "Michael S. Tsirkin" , linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Paolo Bonzini

On 08/30/2015 05:12 PM, Michael S. Tsirkin wrote:
> Even when we skip data decoding, MMIO is slightly slower
> than port IO because it uses the page tables, so the CPU
> must do a page walk on each access.
>
> This overhead is normally masked by the TLB cache,
> but not so for KVM MMIO, where PTEs are marked as reserved
> and so are never cached.
>
> As ioeventfd memory is never read, make it possible to use
> RO pages on the host for ioeventfds, instead.

I like this idea.

> The result is that TLBs are cached, which finally makes MMIO
> as fast as port IO.

What does "TLBs are cached" mean? Even after applying the patch
no new TLB type can be cached.

>
> Signed-off-by: Michael S. Tsirkin
> ---
>  arch/x86/kvm/vmx.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 9d1bfd3..ed44026 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -5745,6 +5745,11 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
>  		vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO, GUEST_INTR_STATE_NMI);
>
>  	gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
> +	if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
> +		skip_emulated_instruction(vcpu);
> +		return 1;
> +	}
> +

I am afraid that the common page fault entry point is not a good place to
do the work. How about moving it to kvm_handle_bad_page() instead? The
difference is that the workload of fast_page_fault() is then included, but
that is light enough, and MMIO exits should not be very frequent, so I
think it is okay.
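
For reference, a minimal userspace sketch of how such an MMIO doorbell ioeventfd is
typically registered with KVM in the first place. The struct and ioctl are the documented
KVM_IOEVENTFD interface; the helper name and the caller-supplied address/length are
illustrative assumptions, not part of this patch:

	#include <stdint.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/eventfd.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Arm an eventfd for guest MMIO writes to [addr, addr + len). */
	static int register_mmio_ioeventfd(int vm_fd, uint64_t addr, uint32_t len)
	{
		struct kvm_ioeventfd args;
		int efd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);

		if (efd < 0)
			return -1;

		memset(&args, 0, sizeof(args));
		args.addr  = addr;	/* guest-physical doorbell address */
		args.len   = len;	/* access width to match */
		args.fd    = efd;	/* signalled on every matching guest write */
		args.flags = 0;		/* MMIO (no KVM_IOEVENTFD_FLAG_PIO), no datamatch */

		if (ioctl(vm_fd, KVM_IOEVENTFD, &args) < 0) {
			close(efd);
			return -1;
		}
		return efd;		/* poll this fd to service the device */
	}

A guest write to that address then only signals the eventfd; the hunk quoted above lets
the EPT-violation handler complete such writes via KVM_FAST_MMIO_BUS without decoding
the faulting instruction.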