From: Marcelo Tosatti <mtosatti@redhat.com>
To: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Cc: kvm <kvm@vger.kernel.org>, Ulrich Obergfell <uobergfe@redhat.com>,
Takuya Yoshikawa <takuya.yoshikawa@gmail.com>,
Avi Kivity <avi.kivity@gmail.com>
Subject: Re: KVM: MMU: improve n_max_mmu_pages calculation with TDP
Date: Thu, 21 Mar 2013 11:29:19 -0300
Message-ID: <20130321142919.GA30837@amt.cnet>
In-Reply-To: <514A9DA7.10702@linux.vnet.ibm.com>
On Thu, Mar 21, 2013 at 01:41:59PM +0800, Xiao Guangrong wrote:
> On 03/21/2013 04:14 AM, Marcelo Tosatti wrote:
> >
> > kvm_mmu_calculate_mmu_pages currently computes
> >
> > maximum number of shadow pages = 2% of mapped guest pages
> >
> > This does not make sense for TDP guests, where mapping all of guest
> > memory with 4k pages cannot take more than "mapped guest pages / 512"
> > page-table pages (not counting root pages).
> >
> > Allow that maximum for TDP, forcing the guest to recycle otherwise.
> >
> > Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> >
> > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > index 956ca35..a9694a8d7 100644
> > --- a/arch/x86/kvm/mmu.c
> > +++ b/arch/x86/kvm/mmu.c
> > @@ -4293,7 +4293,7 @@ nomem:
> > unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm)
> > {
> > unsigned int nr_mmu_pages;
> > - unsigned int nr_pages = 0;
> > + unsigned int i, nr_pages = 0;
> > struct kvm_memslots *slots;
> > struct kvm_memory_slot *memslot;
> >
> > @@ -4302,7 +4302,19 @@ unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm)
> > kvm_for_each_memslot(memslot, slots)
> > nr_pages += memslot->npages;
> >
> > - nr_mmu_pages = nr_pages * KVM_PERMILLE_MMU_PAGES / 1000;
> > + if (tdp_enabled) {
> > + /* one root page */
> > + nr_mmu_pages = 1;
> > + /* nr_pages / (512^i) per level, due to
> > + * guest RAM map being linear */
> > + for (i = 1; i < 4; i++) {
> > + int nr_pages_round = nr_pages + (1 << (9*i));
> > + nr_mmu_pages += nr_pages_round >> (9*i);
> > + }
>
> Marcelo,
>
> Can it work when a nested guest is used? Did you see any problem in practice (a direct
> guest using more memory than your calculation)?
A direct guest can use more than the calculated amount by switching between
different paging modes.
About nested guests: at any one point in time the working set cannot exceed
the number of physical pages visible to the guest.
Allowing an excessively high number of shadow pages is also a security
concern, as long, unpreemptable operations are necessary to tear the
pages down.
> And MMIO can also build some page tables, which does not seem to be considered
> in this patch.
Right, but it's only a few pages. Same argument as above: the working set at
any given time is smaller than total RAM. Do you see any potential
problem?
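
For reference, a minimal user-space sketch (not kernel code) of the calculation
in the patch above: it assumes KVM_PERMILLE_MMU_PAGES is 20 (the 2% heuristic),
replaces the memslot walk with a hypothetical 4 GiB guest, and compares the old
heuristic with the TDP upper bound computed by the patch's loop.

#include <stdio.h>

#define KVM_PERMILLE_MMU_PAGES 20	/* assumed: old heuristic, 2% of guest pages */

/* Upper bound on page-table pages needed to map nr_pages guest pages
 * with 4k TDP mappings, mirroring the loop in the patch above. */
static unsigned int tdp_max_mmu_pages(unsigned int nr_pages)
{
	unsigned int nr_mmu_pages = 1;	/* one root page */
	int i;

	/* roughly nr_pages / (512^i) table pages per level, rounded up,
	 * assuming the guest RAM map is linear */
	for (i = 1; i < 4; i++) {
		unsigned int nr_pages_round = nr_pages + (1U << (9 * i));
		nr_mmu_pages += nr_pages_round >> (9 * i);
	}
	return nr_mmu_pages;
}

int main(void)
{
	unsigned int nr_pages = 1048576;	/* hypothetical 4 GiB guest, 4k pages */

	printf("2%% heuristic: %u pages\n",
	       nr_pages * KVM_PERMILLE_MMU_PAGES / 1000);
	printf("TDP bound:    %u pages\n", tdp_max_mmu_pages(nr_pages));
	return 0;
}

For the hypothetical 4 GiB guest this prints about 20971 pages for the 2%
heuristic versus about 2056 for the TDP bound, i.e. roughly the 1/512 ratio
the changelog refers to.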