From: Gleb Natapov <gleb@redhat.com>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>,
linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: Re: [PATCH v2 0/7] KVM: MMU: fast zap all shadow pages
Date: Thu, 18 Apr 2013 19:36:03 +0300
Message-ID: <20130418163603.GG5807@redhat.com>
In-Reply-To: <20130418140118.GA32288@amt.cnet>

On Thu, Apr 18, 2013 at 11:01:18AM -0300, Marcelo Tosatti wrote:
> On Thu, Apr 18, 2013 at 12:42:39PM +0300, Gleb Natapov wrote:
> > > > that, but if not then less code is better.
> > >
> > > The number of sp->role.invalid=1 pages is small (only shadow roots). It
> > > can grow, but it is bounded by a handful, so no improvement is visible
> > > there.
> > >
> > > The number of shadow pages with an old mmu_gen_number is potentially large.
> > >
> > > Returning all shadow pages to the allocator is problematic because it
> > > takes a long time (hence the suggestion to postpone it).
> > >
> > > Spreading the work of freeing (or reusing) those shadow pages across
> > > individual page faults alleviates the mmu_lock hold time issue without
> > > significantly slowing things down after kvm_mmu_zap_all (which has to
> > > rebuild all pagetables anyway).
> > >
> > > Do you prefer modifying the SLAB allocator to aggressively free these
> > > stale shadow pages, rather than having kvm_mmu_get_page reuse them?
> > Are you saying that what makes kvm_mmu_zap_all() slow is that we return
> > all the shadow pages to the SLAB allocator? As far as I understand, what
> > makes it slow is walking over a huge number of shadow pages via various
> > lists; actually releasing them to the SLAB is not an issue, otherwise
> > the problem could have been solved by just moving
> > kvm_mmu_commit_zap_page() out of the mmu_lock. If there is measurable
> > SLAB overhead from not reusing the pages, I am all for reusing them, but
> > is that really the case, or is it premature optimization?
>
> Actually releasing them is not a problem. Walking all the pages and lists,
> and releasing them in the process, is the problematic part ("returning them
> to the allocator" would have been clearer as "freeing them").
>
> The point is that at some point you have to walk all the pages and release
> their data structures. With Xiao's scheme it is possible to avoid this
> lengthy process by either:
>
> 1) reusing the pages with a stale generation number, or
> 2) releasing them via the SLAB shrinker more aggressively
>
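To make sure we are reading the series the same way: as I understand it,
the scheme is roughly the sketch below. The mmu_gen_number naming follows
this thread; the function names and everything else are illustrative, not
the actual patch code.

	/* Zap-all becomes cheap under mmu_lock: just bump the generation. */
	static void kvm_mmu_invalidate_all_pages(struct kvm *kvm)
	{
		spin_lock(&kvm->mmu_lock);
		kvm->arch.mmu_gen_number++;	/* every existing sp is now stale */
		spin_unlock(&kvm->mmu_lock);
	}

	/* A shadow page from an older generation must not be used as is. */
	static bool sp_is_stale(struct kvm *kvm, struct kvm_mmu_page *sp)
	{
		return sp->mmu_gen_number != kvm->arch.mmu_gen_number;
	}

The open question is then only what to do with the stale pages afterwards:
reuse them in kvm_mmu_get_page() (your 1) or let the shrinker free them
(your 2).
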
But is it really so? The number of allocated shadow pages is limited
via the n_max_mmu_pages mechanism, so I expect most freeing to happen in
make_mmu_pages_available(), which is called during page faults, so the
freeing will be spread across page faults more or less equally anyway.
Doing kvm_mmu_prepare_zap_page()/kvm_mmu_commit_zap_page() and zapping an
unknown number of shadow pages during kvm_mmu_get_page() just to reuse one
does not sound like a clear win to me.
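
For reference, the freeing path I have in mind looks roughly like this (a
simplified sketch; the exact helper names in the tree may differ):

	static void make_mmu_pages_available(struct kvm_vcpu *vcpu)
	{
		LIST_HEAD(invalid_list);

		/*
		 * Runs in the page fault path before a new shadow page is
		 * allocated: while the headroom below n_max_mmu_pages is
		 * low, zap the oldest pages a few at a time, so the freeing
		 * cost is paid in small installments across page faults
		 * rather than in one long mmu_lock hold.
		 */
		while (kvm_mmu_available_pages(vcpu->kvm) < KVM_MIN_FREE_MMU_PAGES) {
			if (!prepare_zap_oldest_mmu_page(vcpu->kvm, &invalid_list))
				break;
			++vcpu->kvm->stat.mmu_recycled;
		}
		kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
	}

Stale pages would get recycled here like any others, with no extra code in
kvm_mmu_get_page().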
> (another typo, I meant "SLAB shrinker", not "SLAB allocator").
>
> But you seem to be concerned about 1) due to code complexity issues?
>
It adds code that looks redundant to me. I may be wrong, of course; if it
is a demonstrable win, I am all for it.
--
Gleb.