From: Avi Kivity <avi@redhat.com>
To: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Cc: mtosatti@redhat.com, agraf@suse.de, paulus@samba.org,
	aarcange@redhat.com, kvm@vger.kernel.org,
	kvm-ppc@vger.kernel.org, linux-kernel@vger.kernel.org,
	takuya.yoshikawa@gmail.com
Subject: Re: [PATCH 3/4] KVM: MMU: Make kvm_handle_hva() handle range of addresses
Date: Mon, 18 Jun 2012 15:11:42 +0300	[thread overview]
Message-ID: <4FDF1AFE.4000607@redhat.com> (raw)
In-Reply-To: <20120615203230.2c577652.yoshikawa.takuya@oss.ntt.co.jp>

On 06/15/2012 02:32 PM, Takuya Yoshikawa wrote:
> When guest's memory is backed by THP pages, MMU notifier needs to call
> kvm_unmap_hva(), which in turn leads to kvm_handle_hva(), in a loop to
> invalidate a range of pages which constitute one huge page:
> 
>   for each guest page
>     for each memslot
>       if page is in memslot
>         unmap using rmap
> 
> This means although every page in that range is expected to be found in
> the same memslot, we are forced to check unrelated memslots many times.
> If the guest has more memslots, the situation will become worse.
> 
> This patch, together with the following patch, solves this problem by
> introducing kvm_handle_hva_range() which makes the loop look like this:
> 
>   for each memslot
>     for each guest page in memslot
>       unmap using rmap
> 
> In this new processing, the actual work is converted to a loop over
> the rmap array, which is much more cache friendly than before.


Moreover, if the pages are in no slot (munmap of some non-guest memory),
then we're iterating over all those pages for no purpose.
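The restructured iteration can be sketched in plain C. The slot struct, handle_range() name, and page-counting "handler" below are simplified stand-ins for KVM's memslot structures and rmap handlers, not the real kernel code:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for KVM's memslot structures; the names and
 * fields are illustrative, not the real kernel definitions. */
struct slot {
	unsigned long base_gfn;
	unsigned long npages;
};

/* Walk [start_gfn, end_gfn) in the shape the patch introduces:
 * for each memslot, clamp the range to the slot, then iterate the
 * contiguous gfns inside it.  Here the "handler" just counts pages. */
static unsigned long handle_range(const struct slot *slots, size_t nslots,
				  unsigned long start_gfn,
				  unsigned long end_gfn)
{
	unsigned long handled = 0;
	size_t i;

	for (i = 0; i < nslots; i++) {
		unsigned long gfn = start_gfn;
		unsigned long end = end_gfn;

		/* Clamp the range to this slot before looping. */
		if (gfn < slots[i].base_gfn)
			gfn = slots[i].base_gfn;
		if (end > slots[i].base_gfn + slots[i].npages)
			end = slots[i].base_gfn + slots[i].npages;

		for (; gfn < end; gfn++)
			handled++;	/* handler(kvm, rmapp, data) in the real code */
	}
	return handled;
}
```

Each slot is visited exactly once; a range that misses a slot clamps to an empty interval and costs nothing, which is the cache-friendly shape the patch is after.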

> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index ba57b3b..3629f9b 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -1185,10 +1185,13 @@ static int kvm_set_pte_rmapp(struct kvm *kvm, unsigned long *rmapp,
>  	return 0;
>  }
>  
> -static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
> -			  unsigned long data,
> -			  int (*handler)(struct kvm *kvm, unsigned long *rmapp,
> -					 unsigned long data))
> +static int kvm_handle_hva_range(struct kvm *kvm,
> +				unsigned long start_hva,
> +				unsigned long end_hva,
> +				unsigned long data,
> +				int (*handler)(struct kvm *kvm,
> +					       unsigned long *rmapp,
> +					       unsigned long data))
>  {
>  	int j;
>  	int ret;
> @@ -1199,10 +1202,13 @@ static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
>  	slots = kvm_memslots(kvm);
>  
>  	kvm_for_each_memslot(memslot, slots) {
> -		gfn_t gfn = hva_to_gfn(hva, memslot);
> +		gfn_t gfn = hva_to_gfn(start_hva, memslot);
> +		gfn_t end_gfn = hva_to_gfn(end_hva, memslot);

These will return random results which you then use in min/max later, no?
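To make the concern concrete, here is a sketch assuming hva_to_gfn() is a plain unsigned subtraction with no in-slot check (that implementation, the slot constants, and the function names are all hypothetical):

```c
#include <assert.h>

/* Hypothetical slot: hva base, first gfn, size in pages.  These names
 * and values are illustrative only. */
#define PAGE_SHIFT 12
static const unsigned long userspace_addr = 0x200000UL;
static const unsigned long base_gfn       = 0x100UL;
static const unsigned long npages         = 0x100UL;

/* Assumed hva_to_gfn(): a plain unsigned subtraction with no in-slot
 * check.  For an hva below userspace_addr the arithmetic wraps to a
 * huge gfn. */
static unsigned long hva_to_gfn(unsigned long hva)
{
	return base_gfn + ((hva - userspace_addr) >> PAGE_SHIFT);
}

/* The patch's clamp-then-loop shape, counting visited pages. */
static unsigned long pages_visited(unsigned long start_hva,
				   unsigned long end_hva)
{
	unsigned long gfn = hva_to_gfn(start_hva);
	unsigned long end_gfn = hva_to_gfn(end_hva);
	unsigned long n = 0;

	if (gfn < base_gfn)			/* the max() clamp */
		gfn = base_gfn;
	if (end_gfn > base_gfn + npages)	/* the min() clamp */
		end_gfn = base_gfn + npages;

	for (; gfn < end_gfn; gfn++)
		n++;
	return n;
}
```

With start_hva = 0x1ff000 (one page below the slot) and end_hva = 0x210000 (inside it), the subtraction wraps to a gfn far above the slot, max() keeps the wrapped value, and the loop runs zero times even though part of the range really is in the slot.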

> +
> +		gfn = max(gfn, memslot->base_gfn);
> +		end_gfn = min(end_gfn, memslot->base_gfn + memslot->npages);
>  
> -		if (gfn >= memslot->base_gfn &&
> -		    gfn < memslot->base_gfn + memslot->npages) {
> +		for (; gfn < end_gfn; gfn++) {
>  			ret = 0;
>  
>  			for (j = PT_PAGE_TABLE_LEVEL;
> @@ -1212,7 +1218,9 @@ static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
>  				rmapp = __gfn_to_rmap(gfn, j, memslot);
>  				ret |= handler(kvm, rmapp, data);

Potential for improvement: don't do 512 iterations on the same large page.

Something like

    if ((gfn ^ prev_gfn) & mask(level))
        ret |= handler(...)

with clever selection of the first prev_gfn so it always matches (~gfn
maybe).
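A sketch of that skip, with level_mask() as an illustrative stand-in for KVM's level-to-mask mapping (9 gfn bits per level; the real helper may differ):

```c
#include <assert.h>

/* Illustrative level-to-mask mapping: level 1 (4K pages) masks nothing
 * out, level 2 (2M pages) groups gfns in runs of 512, and so on. */
static unsigned long level_mask(int level)
{
	return ~0UL << ((level - 1) * 9);
}

/* Count handler invocations over [start, end): the handler fires only
 * when gfn crosses into a new frame at the given level.  prev_gfn
 * starts as ~start so the very first test always mismatches and the
 * first gfn is handled. */
static unsigned count_handler_calls(unsigned long start, unsigned long end,
				    int level)
{
	unsigned calls = 0;
	unsigned long prev_gfn = ~start;
	unsigned long gfn;

	for (gfn = start; gfn < end; gfn++) {
		if ((gfn ^ prev_gfn) & level_mask(level)) {
			calls++;	/* ret |= handler(...) in the real code */
			prev_gfn = gfn;
		}
	}
	return calls;
}
```

At level 2 a 2M run of 512 gfns then triggers one handler call instead of 512, while at level 1 every gfn still gets its own call.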


-- 
error compiling committee.c: too many arguments to function

Thread overview: 12+ messages
2012-06-15 11:30 [RFC PATCH 0/4] KVM: Optimize MMU notifier's THP page invalidation Takuya Yoshikawa
2012-06-15 11:30 ` [PATCH 1/4] KVM: MMU: Use __gfn_to_rmap() to clean up kvm_handle_hva() Takuya Yoshikawa
2012-06-15 11:31 ` [PATCH 2/4] KVM: Introduce hva_to_gfn() for kvm_handle_hva() Takuya Yoshikawa
2012-06-15 21:49   ` Takuya Yoshikawa
2012-06-18 11:59   ` Avi Kivity
2012-06-15 11:32 ` [PATCH 3/4] KVM: MMU: Make kvm_handle_hva() handle range of addresses Takuya Yoshikawa
2012-06-18 12:11   ` Avi Kivity [this message]
2012-06-18 13:20     ` Takuya Yoshikawa
2012-06-19 13:46     ` Takuya Yoshikawa
2012-06-21  8:24       ` Avi Kivity
2012-06-21 13:41         ` Takuya Yoshikawa
2012-06-15 11:33 ` [PATCH 4/4] KVM: Introduce kvm_unmap_hva_range() for kvm_mmu_notifier_invalidate_range_start() Takuya Yoshikawa
