From: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>,
LKML <linux-kernel@vger.kernel.org>, KVM <kvm@vger.kernel.org>
Subject: Re: [PATCH 6/6] KVM: MMU: fast zap all shadow pages
Date: Tue, 19 Mar 2013 23:37:38 +0800
Message-ID: <51488642.4060809@linux.vnet.ibm.com>
In-Reply-To: <20130319144043.GA8767@amt.cnet>
On 03/19/2013 10:40 PM, Marcelo Tosatti wrote:
>
> I misunderstood the benefit of your idea (now I get it: zapping the root
> and flushing the TLB guarantees that vcpus will refault). What I'd like to avoid is
>
> memset(cache, 0, sizeof(*cache));
> kvm_mmu_init(kvm);
>
> I'd prefer normal operations on those data structures (in mmu_cache).
> Also, the page accounting is a problem.
>
> Perhaps you can use a generation number to decide whether shadow pages
> are still valid? So:
>
> find_sp(gfn_t gfn)
>     lookup hash
>     if sp->generation_number != mmu->current_generation_number
>         initialize page as if it were just allocated (but keep it in the hash list)
>
> And on kvm_mmu_zap_all()
>     spin_lock(mmu_lock)
>     for each page
>         if page->root_count
>             zero sp->spt[]
>     flush TLB
>     mmu->current_generation_number++
>     spin_unlock(mmu_lock)
>
> Then have kvm_mmu_free_all() that actually frees all data.
>
> Hum, not sure if that's any better than your current patchset.
I also had the idea of using a generation number, much like yours; I mentioned
it in [PATCH 0/6]:
* TODO
Lest Marcelo beat me to it :), there is some work not included here, in order
to keep the patchset smaller:
(1): batch kvm_reload_remote_mmus for zapping root shadow pages
(2): free shadow pages by using a generation number (see the sketch below)
(3): remove unneeded kvm_reload_remote_mmus after kvm_mmu_zap_all
(4): drop unnecessary @npages from kvm_arch_create_memslot
(5): rename init_kvm_mmu to init_vcpu_mmu
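
For concreteness, here is a minimal user-space C sketch of the
generation-number scheme you outline above (item (2) in my list). The
types and names (shadow_page, mmu_cache, find_sp, zap_all) are simplified
stand-ins for illustration, not the real KVM structures or APIs:

#include <stddef.h>
#include <string.h>

/* Simplified stand-ins for the real KVM structures (illustrative only). */
struct shadow_page {
	unsigned long generation;   /* generation the page was (re)initialized in */
	unsigned long gfn;
	unsigned long spt[512];     /* shadow page table entries */
	int root_count;             /* nonzero if in use as a root */
};

struct mmu_cache {
	unsigned long current_generation;
};

/*
 * Reinitialize a stale page in place, as if it had just been allocated,
 * while keeping it on the hash list.
 */
static void sp_reinit(struct shadow_page *sp, struct mmu_cache *mmu)
{
	memset(sp->spt, 0, sizeof(sp->spt));
	sp->generation = mmu->current_generation;
}

/* A page found in the hash with an older generation is treated as invalid. */
static struct shadow_page *find_sp(struct shadow_page *sp_from_hash,
				   struct mmu_cache *mmu)
{
	if (sp_from_hash && sp_from_hash->generation != mmu->current_generation)
		sp_reinit(sp_from_hash, mmu);
	return sp_from_hash;
}

/*
 * "Zap all" degenerates to a generation bump: every existing page becomes
 * stale and is reinitialized lazily on its next lookup.  (In your sketch
 * this runs under mmu_lock, eagerly zeroes spt[] of pages with a nonzero
 * root_count, and is followed by a TLB flush.)
 */
static void zap_all(struct mmu_cache *mmu)
{
	mmu->current_generation++;
}

int main(void)
{
	struct mmu_cache mmu = { .current_generation = 1 };
	struct shadow_page sp = { .generation = 1, .gfn = 0x1000 };

	zap_all(&mmu);      /* O(1): no per-page teardown here */
	find_sp(&sp, &mmu); /* the stale page is reinitialized lazily */
	return 0;
}

This keeps the zap itself O(1) under mmu_lock and defers the per-page work
to the next fault path, which also avoids the wholesale
memset()/kvm_mmu_init() reset you mentioned.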
> Well, maybe resend the patchset with bug fixes / improvements and
> we'll go from there.
I agree. Thanks for your time, Marcelo!
Thread overview: 15+ messages
2013-03-13 4:55 [PATCH 0/6] KVM: MMU: fast zap all shadow pages Xiao Guangrong
2013-03-13 4:55 ` [PATCH 1/6] KVM: MMU: move mmu related members into a separate struct Xiao Guangrong
2013-03-13 4:56 ` [PATCH 2/6] KVM: MMU: introduce mmu_cache->pte_list_descs Xiao Guangrong
2013-03-13 4:57 ` [PATCH 3/6] KVM: x86: introduce memslot_set_lpage_disallowed Xiao Guangrong
2013-03-13 4:57 ` [PATCH 4/6] KVM: x86: introduce kvm_clear_all_gfn_page_info Xiao Guangrong
2013-03-13 4:58 ` [PATCH 5/6] KVM: MMU: delete shadow page from hash list in kvm_mmu_prepare_zap_page Xiao Guangrong
2013-03-13 4:59 ` [PATCH 6/6] KVM: MMU: fast zap all shadow pages Xiao Guangrong
2013-03-14 1:07 ` Marcelo Tosatti
2013-03-14 1:35 ` Marcelo Tosatti
2013-03-14 4:42 ` Xiao Guangrong
2013-03-18 20:46 ` Marcelo Tosatti
2013-03-19 3:06 ` Xiao Guangrong
2013-03-19 14:40 ` Marcelo Tosatti
2013-03-19 15:37 ` Xiao Guangrong [this message]
2013-03-19 22:37 ` Marcelo Tosatti