From: Takuya Yoshikawa <takuya.yoshikawa@gmail.com>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>,
	avi@redhat.com, kvm@vger.kernel.org, gleb@redhat.com
Subject: Re: [PATCH RESEND] KVM: MMU: Fix mmu_shrink() so that it can free mmu pages as intended
Date: Wed, 15 Aug 2012 11:11:51 +0900
Message-ID: <20120815111151.96fc2553d25fc3bb6b69d4e4@gmail.com>
In-Reply-To: <20120814151712.GA14582@amt.cnet>

On Tue, 14 Aug 2012 12:17:12 -0300
Marcelo Tosatti <mtosatti@redhat.com> wrote:

> -               if (kvm->arch.n_used_mmu_pages > 0) {
> -                       if (!nr_to_scan--)
> -                               break;

-- (*1)

> +               if (!kvm->arch.n_used_mmu_pages)
>                         continue;

-- (*2)

> -               }
> 
>                 idx = srcu_read_lock(&kvm->srcu);
>                 spin_lock(&kvm->mmu_lock);
> 
> This patch removes the maximum (successful) loops, which is nr_scan ==
> sc->nr_to_scan.

IIUC, there was no successful loop from the beginning:

  if (kvm->arch.n_used_mmu_pages > 0) {
    if (!nr_to_scan--)
      break;  -- (*1)
    continue; -- (*2)
  }

Before the patch, even when we found a VM with kvm->arch.n_used_mmu_pages
greater than 0, we would only ever do one of two things:
  skip it (*2), or
  break out of the loop (*1) once nr_to_scan reached 0.

We only reach
  kvm_mmu_remove_some_alloc_mmu_pages(kvm, &invalid_list);
  kvm_mmu_commit_zap_page(kvm, &invalid_list);
when (kvm->arch.n_used_mmu_pages == 0), which is probably why

  commit 85b7059169e128c57a3a8a3e588fb89cb2031da1
  KVM: MMU: fix shrinking page from the empty mmu

could hit that very unlikely condition so easily.
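
In other words, the loop body before the patch was effectively the
following (reconstructed from the diff above, so take it as a sketch;
the unlock paths and the rest of mmu_shrink() are elided):

  list_for_each_entry(kvm, &vm_list, vm_list) {
          if (kvm->arch.n_used_mmu_pages > 0) {
                  if (!nr_to_scan--)
                          break;    /* (*1) */
                  continue;         /* (*2) */
          }

          /* reached only when the MMU is already empty */
          idx = srcu_read_lock(&kvm->srcu);
          spin_lock(&kvm->mmu_lock);

          kvm_mmu_remove_some_alloc_mmu_pages(kvm, &invalid_list);
          kvm_mmu_commit_zap_page(kvm, &invalid_list);
          ...
  }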

So we were just looping over the VMs, trying to free pages from empty MMUs.

> The description above where you say 'possibility that we see
> "n_used_mmu_pages == 0" 128 times' does not match the patch above.

Sorry about that.

> If the patch is correct, then please explain it clearly in the
> changelog.

Yes, I will do so.

> What is the reasoning to remove nr_to_scan? What tests did you perform?

I just confirmed that:
 - mmu_shrink() did not free any pages: it just checked every VM and
   did "continue"
 - with my patch, it could free pages from the first VM with
   (n_used_mmu_pages > 0), as sketched below
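
That is, with the patch the loop becomes roughly the following (again a
sketch based on the diff above, with the cleanup paths elided):

  list_for_each_entry(kvm, &vm_list, vm_list) {
          if (!kvm->arch.n_used_mmu_pages)
                  continue;    /* skip only the truly empty MMUs */

          idx = srcu_read_lock(&kvm->srcu);
          spin_lock(&kvm->mmu_lock);

          kvm_mmu_remove_some_alloc_mmu_pages(kvm, &invalid_list);
          kvm_mmu_commit_zap_page(kvm, &invalid_list);
          ...
  }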

About nr_to_scan:
If my explanation above is right, it is not functioning at all.
But since keeping it will not hurt anyone, and it may help us when we
change our batch size, I won't remove it in the next version.
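For example, it could still cap how many VMs we try to free from in one
shrinker call, with something like this (just an illustration of one
possible placement, not necessarily what the next version will do):

  list_for_each_entry(kvm, &vm_list, vm_list) {
          if (!kvm->arch.n_used_mmu_pages)
                  continue;

          if (!nr_to_scan--)
                  break;    /* visited sc->nr_to_scan non-empty VMs */
          ...
  }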

Thanks,
	Takuya
