public inbox for kvm@vger.kernel.org
From: Paolo Bonzini <pbonzini@redhat.com>
To: KarimAllah Ahmed <karahmed@amazon.de>,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org
Cc: hpa@zytor.com, jmattson@google.com, mingo@redhat.com,
	rkrcmar@redhat.com, tglx@linutronix.de
Subject: Re: [PATCH 00/10] KVM/X86: Handle guest memory that does not have a struct page
Date: Thu, 12 Apr 2018 16:59:54 +0200	[thread overview]
Message-ID: <cedf2ad3-43ef-c2ec-0455-4dc27842f71a@redhat.com> (raw)
In-Reply-To: <1519235241-6500-1-git-send-email-karahmed@amazon.de>

On 21/02/2018 18:47, KarimAllah Ahmed wrote:
> For the most part, KVM can handle guest memory that does not have a struct
> page (i.e. memory not directly managed by the kernel). However, there are a few
> places in the code, especially in the nested code, that do not support that.
> 
> Patches 1, 2, and 3 avoid the mapping and unmapping altogether and just
> use kvm_read_guest and kvm_write_guest directly.
> 
> Patch 4 introduces a new guest mapping interface that encapsulates all the
> boilerplate code needed to map and unmap guest memory. It also
> supports guest memory without a "struct page".
> 
> Patches 5 through 10 switch most of the offending code in VMX and hyperv
> to use the new guest mapping API.
> 
> This patch series is the first set of fixes. SVM and the APIC-access page
> will be handled in a separate patch series.

I like the patches and the new API.  However, I'm a bit less convinced
about the caching aspect; keeping a page pinned is not the nicest thing
with respect (for example) to memory hot-unplug.

Since you're basically reinventing kmap_high, or alternatively
(depending on your background) xc_map_foreign_pages, it's not surprising
that memremap is slow.  How slow is it really (as seen e.g. with
vmexit.flat running in L1, on EC2 compared to vanilla KVM)?

Perhaps you can keep some kind of per-CPU cache of the last N remapped
pfns?  This cache would sit between memremap and __kvm_map_gfn and it
would be completely transparent to the layer below since it takes raw
pfns.  This removes the need to store the memslots generation etc.  (If
you go this way please place it in virt/kvm/pfncache.[ch], since
kvm_main.c is already way too big).
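To make the suggestion concrete, here is a minimal userspace sketch of such a per-CPU cache of recently remapped pfns. All names, the direct-mapped layout, and the cache size are illustrative assumptions, not code from the series; a real implementation would live per CPU, take a lock or disable preemption, and unmap the evicted hva.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative size; the real N would be tuned experimentally. */
#define PFNCACHE_SIZE 16

struct pfncache_entry {
	uint64_t pfn;
	void *hva;	/* result of the expensive memremap */
	bool valid;
};

/* One of these would exist per CPU; a single instance here for clarity. */
struct pfncache {
	struct pfncache_entry entries[PFNCACHE_SIZE];
};

static void pfncache_init(struct pfncache *c)
{
	memset(c, 0, sizeof(*c));
}

/* Direct-mapped lookup keyed on the raw pfn: because the cache sits below
 * __kvm_map_gfn, no memslots generation needs to be stored. */
static void *pfncache_lookup(struct pfncache *c, uint64_t pfn)
{
	struct pfncache_entry *e = &c->entries[pfn % PFNCACHE_SIZE];

	return (e->valid && e->pfn == pfn) ? e->hva : NULL;
}

/* Insert evicts whatever previously hashed to the same slot; a real
 * implementation would unmap the evicted hva before overwriting it. */
static void pfncache_insert(struct pfncache *c, uint64_t pfn, void *hva)
{
	struct pfncache_entry *e = &c->entries[pfn % PFNCACHE_SIZE];

	e->pfn = pfn;
	e->hva = hva;
	e->valid = true;
}
```

The point of keying on raw pfns is exactly the transparency described above: the layer below sees only pfn-to-hva translations, and invalidation on eviction replaces any generation tracking.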

Thanks,

Paolo

> KarimAllah Ahmed (10):
>   X86/nVMX: handle_vmon: Read 4 bytes from guest memory instead of
>     map->read->unmap sequence
>   X86/nVMX: handle_vmptrld: Copy the VMCS12 directly from guest memory
>     instead of map->copy->unmap sequence.
>   X86/nVMX: Update the PML table without mapping and unmapping the page
>   KVM: Introduce a new guest mapping API
>   KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap
>   KVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page
>   KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt
>     descriptor table
>   KVM/X86: Use kvm_vcpu_map in emulator_cmpxchg_emulated
>   KVM/X86: hyperv: Use kvm_vcpu_map in synic_clear_sint_msg_pending
>   KVM/X86: hyperv: Use kvm_vcpu_map in synic_deliver_msg
> 
>  arch/x86/kvm/hyperv.c    |  28 ++++-----
>  arch/x86/kvm/vmx.c       | 144 +++++++++++++++--------------------------------
>  arch/x86/kvm/x86.c       |  13 ++---
>  include/linux/kvm_host.h |  15 +++++
>  virt/kvm/kvm_main.c      |  50 ++++++++++++++++
>  5 files changed, 129 insertions(+), 121 deletions(-)
> 


Thread overview: 12+ messages
     [not found] <1519235241-6500-1-git-send-email-karahmed@amazon.de>
2018-03-01 15:24 ` [PATCH 00/10] KVM/X86: Handle guest memory that does not have a struct page Raslan, KarimAllah
2018-03-01 17:51   ` Jim Mattson
2018-03-02 17:40     ` Paolo Bonzini
     [not found] ` <1519235241-6500-5-git-send-email-karahmed@amazon.de>
2018-04-12 14:33   ` [PATCH 04/10] KVM: Introduce a new guest mapping API Paolo Bonzini
     [not found] ` <1519235241-6500-6-git-send-email-karahmed@amazon.de>
2018-04-12 14:36   ` [PATCH 05/10] KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap Paolo Bonzini
     [not found] ` <1519235241-6500-7-git-send-email-karahmed@amazon.de>
2018-04-12 14:38   ` [PATCH 06/10] KVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page Paolo Bonzini
2018-04-12 17:57     ` Sean Christopherson
2018-04-12 20:23       ` Paolo Bonzini
     [not found] ` <1519235241-6500-8-git-send-email-karahmed@amazon.de>
2018-04-12 14:39   ` [PATCH 07/10] KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt descriptor table Paolo Bonzini
2018-04-12 14:59 ` Paolo Bonzini [this message]
2018-04-12 21:25   ` [PATCH 00/10] KVM/X86: Handle guest memory that does not have a struct page Raslan, KarimAllah
     [not found] ` <1519235241-6500-4-git-send-email-karahmed@amazon.de>
2018-04-12 15:03   ` [PATCH 03/10] X86/nVMX: Update the PML table without mapping and unmapping the page Paolo Bonzini
