From: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>,
Takuya Yoshikawa <takuya.yoshikawa@gmail.com>,
Gleb Natapov <gleb@redhat.com>,
kvm@vger.kernel.org
Subject: Re: [PATCH 1/2] KVM: MMU: Mark sp mmio cached when creating mmio spte
Date: Thu, 14 Mar 2013 13:36:24 +0800 [thread overview]
Message-ID: <514161D8.6020603@linux.vnet.ibm.com> (raw)
In-Reply-To: <20130314023953.GB18111@amt.cnet>
On 03/14/2013 10:39 AM, Marcelo Tosatti wrote:
> On Thu, Mar 14, 2013 at 11:26:41AM +0900, Takuya Yoshikawa wrote:
>> On Wed, 13 Mar 2013 22:58:21 -0300
>> Marcelo Tosatti <mtosatti@redhat.com> wrote:
>>
>>>>> In zap_spte, don't we need to search the pointer to be removed from the
>>>>> global mmio-rmap list? How long can that list be?
>>>>
>>>> It is not bad. On softmmu, the rmap list has already grown to more than 300 entries.
>>>> On hardmmu, the mmio spte is normally not zapped frequently (it is just set, not cleared).
>>
>> mmu_shrink() is an exception.
>>
>>>>
>>>> The worst case is zap-all-mmio-spte that removes all mmio-spte. This operation
>>>> can be speed up after applying my previous patch:
>>>> KVM: MMU: fast drop all spte on the pte_list
>>
>> My point is other code may need to care more about latency.
>>
>> Zapping all mmio sptes can happen only when changing memory regions:
>> not so latency severe but should be reasonably fast not to hold
>> mmu_lock for a (too) long time.
>>
>> Compared to that, mmu_shrink() may be called any time and adding
>> more work to it should be avoided IMO. It should return ASAP.
Hmm? How frequently is mmu_shrink called? Well, it can be heavy sometimes, but
that is not the case during normal running.
How many mmio shadow pages do we have in the system? Not many, especially on a
virtio-supported guest.
And, if it is a real problem, it is worthwhile to optimize it, since the situation
is even worse for normal page rmap on shadow mmu.
I have an idea to avoid holding mmu-lock, which I mentioned in a previous mail:
cache a generation-number into the mmio spte. When zapping mmio sptes is needed,
we can simply increase the global generation-number.
>
> Good point.
>
>> In general, we should try hard to keep ourselves from affecting
>> unrelated code path for optimizing something. The global pte
>> list is something which can affect many code paths in the future.
>>
>>
>> So, I'm fine with trying mmio-rmap once we can actually measure
>> very long mmu_lock hold time by traversing shadow pages.
>>
>> How about applying this first and then see the effect on big guests?
>
> Works for me. Xiao?
Marcelo, I do not insist on it. ;)