public inbox for kvm@vger.kernel.org
From: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
To: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>,
	Takuya Yoshikawa <takuya.yoshikawa@gmail.com>,
	Gleb Natapov <gleb@redhat.com>,
	Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>,
	kvm@vger.kernel.org
Subject: Re: [PATCH 1/2] KVM: MMU: Mark sp mmio cached when creating mmio spte
Date: Thu, 14 Mar 2013 13:45:32 +0800	[thread overview]
Message-ID: <514163FC.6000202@linux.vnet.ibm.com> (raw)
In-Reply-To: <51415C7A.402@linux.vnet.ibm.com>

On 03/14/2013 01:13 PM, Xiao Guangrong wrote:
> On 03/14/2013 09:58 AM, Marcelo Tosatti wrote:
>> On Wed, Mar 13, 2013 at 10:05:20PM +0800, Xiao Guangrong wrote:
>>> On 03/13/2013 09:40 PM, Takuya Yoshikawa wrote:
>>>> On Wed, 13 Mar 2013 20:42:41 +0800
>>>> Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> wrote:
>>>>
>>>>>>>>> How about saving all mmio sptes into an mmio-rmap?
>>>>>>>>
>>>>>>>> The problem is that other mmu code would need to care about the pointers
>>>>>>>> stored in the new rmap list: when mmu_shrink zaps shadow pages for example.
>>>>>>>
>>>>>>> It is not hard... all of that code is already wrapped by *zap_spte*.
>>>>>>>
>>>>>> So are you going to send a patch? What do you think about applying this
>>>>>> as a temporary solution?
>>>>>
>>>>> Hi Gleb,
>>>>>
>>>>> Since it only needs a small change based on this patch, I think we can directly
>>>>> apply the rmap-based approach.
>>>>>
>>>>> Takuya, could you please do this? ;)
>>>>
>>>> Though I'm fine with making the patch better, I'm still thinking
>>>> about its downsides.
>>>>
>>>> In zap_spte, don't we need to search the pointer to be removed from the
>>>> global mmio-rmap list?  How long can that list be?
>>>
>>> It is not bad. With soft MMU, rmap lists already grow to more than 300 entries.
>>> With hard MMU, mmio sptes are normally not zapped frequently (they are set, rarely cleared).
>>>
>>> The worst case is zap-all-mmio-sptes, which removes every mmio spte. This operation
>>> can be sped up after applying my previous patch:
>>> KVM: MMU: fast drop all spte on the pte_list
>>>
>>>>
>>>> Implementing it may not be difficult, but I'm not sure we would get a
>>>> clear improvement.  Unless that becomes 99% certain, I think we should
>>>> take the basic approach first.
>>>
>>> I am quite sure that zapping all mmio sptes is faster than zapping mmio shadow
>>> pages. ;)
>>
>> With a huge number of shadow pages (think 512GB guest, 262144 pte-level
>> shadow pages to map), it might be a problem.
> 
> That is one of the reasons why I think zapping mmio shadow pages is not good. ;)
> 
> This patch needs to walk all shadow pages to find the mmio shadow pages and
> zap them, so its cost depends on how much memory the guest uses (huge memory
> means a huge number of shadow pages, as you said). But the time to zap mmio
> sptes via the rmap is constant with respect to the memory used.
> 
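To make the idea concrete, here is a minimal standalone sketch of the mmio-rmap
approach discussed above. It is illustrative C only, not KVM code; mmio_rmap,
mmio_rmap_add, mmio_rmap_remove and zap_all_mmio_sptes are made-up names.
Every mmio spte is linked into one global list when it is created, zap_spte
unlinks it again (the linear search asked about above), and zapping all mmio
sptes walks only this list, so the cost scales with the number of mmio sptes
rather than with guest memory.

#include <stdlib.h>

struct mmio_rmap_entry {
	unsigned long long *sptep;		/* pointer to the mmio spte */
	struct mmio_rmap_entry *next;
};

static struct mmio_rmap_entry *mmio_rmap;	/* global list head */

/* called when an mmio spte is created */
static void mmio_rmap_add(unsigned long long *sptep)
{
	struct mmio_rmap_entry *e = malloc(sizeof(*e));

	e->sptep = sptep;
	e->next = mmio_rmap;
	mmio_rmap = e;
}

/* called from the zap_spte path: unlink the pointer before the spte goes away */
static void mmio_rmap_remove(unsigned long long *sptep)
{
	struct mmio_rmap_entry **p = &mmio_rmap;

	while (*p) {
		if ((*p)->sptep == sptep) {
			struct mmio_rmap_entry *e = *p;

			*p = e->next;
			free(e);
			return;
		}
		p = &(*p)->next;
	}
}

/* cost is proportional to the number of mmio sptes, not to guest memory */
static void zap_all_mmio_sptes(void)
{
	while (mmio_rmap) {
		struct mmio_rmap_entry *e = mmio_rmap;

		*e->sptep = 0;			/* clear the spte */
		mmio_rmap = e->next;
		free(e);
	}
}

int main(void)
{
	unsigned long long spte1 = 0x100, spte2 = 0x200;

	mmio_rmap_add(&spte1);
	mmio_rmap_add(&spte2);
	mmio_rmap_remove(&spte1);	/* zap_spte path: linear search */
	zap_all_mmio_sptes();		/* zap-all path: walks only mmio sptes */
	return 0;
}
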
>>
>>>> What do you think?
>>>
>>> I am wondering: if zapping all shadow pages is fast enough (after my patchset), do
>>> we really need to care about it?
>>
>> Still needed: your patch reduces kvm_mmu_zap_all() time, but as you can
>> see, with huge-memory guests even a 100% improvement over the current
>> situation will still be a bottleneck (and as you noted, the deletion case is
>> still unsolved).
> 
> The improvement can be greater when more memory is used. (I only used 2G of
> memory in the guest since my test case is a 32-bit program that cannot use huge
> memory, and there was no lock contention in my test case.)
> 
> Actually, the time complexity of current kvm_mmu_zap_all is the same as zap

                                    ^^^^^
Sorry, I did not mean the current way; I meant the optimized way in my patchset.
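
For contrast, here is a simplified standalone sketch of the approach taken by
the patch under discussion: mark a shadow page as mmio cached when an mmio spte
is created under it, then find the pages to zap by walking the whole shadow
page list. Again this is illustrative C, not the actual patch; mark_mmio_spte
and zap_mmio_shadow_pages are made-up names here. The walk cost grows with the
number of shadow pages, i.e. with guest memory: a pte-level shadow page maps
512 * 4KB = 2MB, which is where the 262144 pages for a 512GB guest mentioned
above come from.

#include <stdbool.h>
#include <stddef.h>

struct shadow_page {
	bool mmio_cached;		/* set when an mmio spte is created under this page */
	struct shadow_page *next;	/* link in the global shadow page list */
};

/* mark the shadow page that now contains an mmio spte */
static void mark_mmio_spte(struct shadow_page *sp)
{
	sp->mmio_cached = true;
}

/* zapping mmio sptes must visit every shadow page to find the marked ones */
static void zap_mmio_shadow_pages(struct shadow_page *all_pages,
				  void (*zap)(struct shadow_page *sp))
{
	struct shadow_page *sp;

	for (sp = all_pages; sp; sp = sp->next)
		if (sp->mmio_cached)
			zap(sp);
}

static void zap_one(struct shadow_page *sp)
{
	sp->mmio_cached = false;	/* stand-in for really zapping the page */
}

int main(void)
{
	struct shadow_page a = { false, NULL };
	struct shadow_page b = { false, &a };	/* list: b -> a */

	mark_mmio_spte(&a);
	zap_mmio_shadow_pages(&b, zap_one);	/* visits b and a, zaps only a */
	return 0;
}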



Thread overview: 21+ messages
2013-03-12  8:43 [PATCH 0/2] KVM: Optimize mmio spte zapping when creating/moving memslot Takuya Yoshikawa
2013-03-12  8:44 ` [PATCH 1/2] KVM: MMU: Mark sp mmio cached when creating mmio spte Takuya Yoshikawa
2013-03-13  5:06   ` Xiao Guangrong
2013-03-13  7:28     ` Takuya Yoshikawa
2013-03-13  7:42       ` Xiao Guangrong
2013-03-13 12:33         ` Gleb Natapov
2013-03-13 12:42           ` Xiao Guangrong
2013-03-13 13:40             ` Takuya Yoshikawa
2013-03-13 14:05               ` Xiao Guangrong
2013-03-14  1:58                 ` Marcelo Tosatti
2013-03-14  2:26                   ` Takuya Yoshikawa
2013-03-14  2:39                     ` Marcelo Tosatti
2013-03-14  5:36                       ` Xiao Guangrong
2013-03-14  5:13                   ` Xiao Guangrong
2013-03-14  5:45                     ` Xiao Guangrong [this message]
2013-03-16  2:01                     ` Takuya Yoshikawa
2013-03-12  8:45 ` [PATCH 2/2] KVM: x86: Optimize mmio spte zapping when creating/moving memslot Takuya Yoshikawa
2013-03-12 12:06   ` Gleb Natapov
2013-03-13  1:40     ` Marcelo Tosatti
2013-03-13  1:41 ` [PATCH 0/2] KVM: " Marcelo Tosatti
2013-03-14  8:23 ` Gleb Natapov
