From: Marcelo Tosatti <mtosatti@redhat.com>
To: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Cc: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>,
Takuya Yoshikawa <takuya.yoshikawa@gmail.com>,
Gleb Natapov <gleb@redhat.com>,
kvm@vger.kernel.org
Subject: Re: [PATCH 1/2] KVM: MMU: Mark sp mmio cached when creating mmio spte
Date: Wed, 13 Mar 2013 23:39:53 -0300 [thread overview]
Message-ID: <20130314023953.GB18111@amt.cnet> (raw)
In-Reply-To: <20130314112641.e2ccbc6b.yoshikawa_takuya_b1@lab.ntt.co.jp>
On Thu, Mar 14, 2013 at 11:26:41AM +0900, Takuya Yoshikawa wrote:
> On Wed, 13 Mar 2013 22:58:21 -0300
> Marcelo Tosatti <mtosatti@redhat.com> wrote:
>
> > > > In zap_spte, don't we need to search the global mmio-rmap list for
> > > > the pointer to be removed? How long can that list be?
> > >
> > > It is not bad. With soft mmu, rmap lists already grow to more than 300 entries.
> > > With hard mmu, mmio sptes are normally not zapped often (they are set, not cleared).
>
> mmu_shrink() is an exception.
>
> > >
> > > The worst case is zap-all-mmio-spte, which removes every mmio spte. This
> > > operation can be sped up by applying my previous patch:
> > > KVM: MMU: fast drop all spte on the pte_list
>
> My point is other code may need to care more about latency.
>
> Zapping all mmio sptes can happen only when changing memory regions:
> not latency-critical, but it should still be reasonably fast so as not
> to hold mmu_lock for too long.
>
> Compared to that, mmu_shrink() may be called at any time, and adding
> more work to it should be avoided IMO. It should return ASAP.
Good point.
> In general, we should try hard to keep ourselves from affecting
> unrelated code paths when optimizing something. The global pte
> list is something that could affect many code paths in the future.
>
>
> So, I'm fine with trying mmio-rmap once we can actually measure
> very long mmu_lock hold times caused by traversing shadow pages.
>
> How about applying this first and then see the effect on big guests?
Works for me. Xiao?
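For reference, the approach under discussion (marking a shadow page as mmio-cached when an mmio spte is created in it, so that zapping only visits marked pages instead of maintaining a global mmio rmap list) can be sketched as follows. This is a minimal standalone illustration with hypothetical, simplified structures; the real kvm_mmu_page in arch/x86/kvm/mmu.c has many more fields and the walk is over kvm->arch.active_mmu_pages under mmu_lock.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified stand-in for struct kvm_mmu_page. */
struct sp {
	bool mmio_cached;	/* set once this page ever held an mmio spte */
	int zapped;		/* stand-in for the real zap machinery */
	struct sp *next;	/* link in a simplified active-pages list */
};

/* Called when an mmio spte is created inside this shadow page:
 * just mark the page; no global mmio rmap list is maintained. */
static void mark_mmio_cached(struct sp *page)
{
	page->mmio_cached = true;
}

/* Zapping all mmio sptes then only needs to visit marked pages,
 * rather than walking every spte in every shadow page. */
static int zap_mmio_cached(struct sp *head)
{
	int zapped = 0;
	struct sp *p;

	for (p = head; p; p = p->next) {
		if (!p->mmio_cached)
			continue;	/* page never cached an mmio spte */
		p->zapped = 1;
		zapped++;
	}
	return zapped;
}
```

The trade-off debated above: this keeps the fast path (mmio spte creation) at a single flag store and adds no global list that mmu_shrink() or zap_spte would have to search, at the cost of still walking the shadow-page list when zapping.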
Thread overview: 21+ messages
2013-03-12 8:43 [PATCH 0/2] KVM: Optimize mmio spte zapping when creating/moving memslot Takuya Yoshikawa
2013-03-12 8:44 ` [PATCH 1/2] KVM: MMU: Mark sp mmio cached when creating mmio spte Takuya Yoshikawa
2013-03-13 5:06 ` Xiao Guangrong
2013-03-13 7:28 ` Takuya Yoshikawa
2013-03-13 7:42 ` Xiao Guangrong
2013-03-13 12:33 ` Gleb Natapov
2013-03-13 12:42 ` Xiao Guangrong
2013-03-13 13:40 ` Takuya Yoshikawa
2013-03-13 14:05 ` Xiao Guangrong
2013-03-14 1:58 ` Marcelo Tosatti
2013-03-14 2:26 ` Takuya Yoshikawa
2013-03-14 2:39 ` Marcelo Tosatti [this message]
2013-03-14 5:36 ` Xiao Guangrong
2013-03-14 5:13 ` Xiao Guangrong
2013-03-14 5:45 ` Xiao Guangrong
2013-03-16 2:01 ` Takuya Yoshikawa
2013-03-12 8:45 ` [PATCH 2/2] KVM: x86: Optimize mmio spte zapping when creating/moving memslot Takuya Yoshikawa
2013-03-12 12:06 ` Gleb Natapov
2013-03-13 1:40 ` Marcelo Tosatti
2013-03-13 1:41 ` [PATCH 0/2] KVM: " Marcelo Tosatti
2013-03-14 8:23 ` Gleb Natapov