From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xiao Guangrong
Subject: Re: [PATCH 1/2] KVM: MMU: Mark sp mmio cached when creating mmio spte
Date: Wed, 13 Mar 2013 22:05:20 +0800
Message-ID: <514087A0.1000704@linux.vnet.ibm.com>
References: <20130312174333.7f76148e.yoshikawa_takuya_b1@lab.ntt.co.jp>
 <20130312174440.5d5199ee.yoshikawa_takuya_b1@lab.ntt.co.jp>
 <5140094F.5080700@linux.vnet.ibm.com>
 <20130313162816.c62899dc.yoshikawa_takuya_b1@lab.ntt.co.jp>
 <51402DDA.607@linux.vnet.ibm.com>
 <20130313123358.GM11223@redhat.com>
 <51407441.4020200@linux.vnet.ibm.com>
 <20130313224056.8c9c87f4d95b332d2273a685@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Cc: Gleb Natapov, Takuya Yoshikawa, mtosatti@redhat.com, kvm@vger.kernel.org
To: Takuya Yoshikawa
In-Reply-To: <20130313224056.8c9c87f4d95b332d2273a685@gmail.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 03/13/2013 09:40 PM, Takuya Yoshikawa wrote:
> On Wed, 13 Mar 2013 20:42:41 +0800
> Xiao Guangrong wrote:
>
>>>>>> How about saving all mmio sptes into a mmio-rmap?
>>>>>
>>>>> The problem is that other mmu code would need to care about the pointers
>>>>> stored in the new rmap list: when mmu_shrink zaps shadow pages, for example.
>>>>
>>>> It is not hard... all of that code is already wrapped by *zap_spte*.
>>>>
>>> So are you going to send a patch? What do you think about applying this
>>> as a temporary solution?
>>
>> Hi Gleb,
>>
>> Since it only needs a small change on top of this patch, I think we can
>> directly apply the rmap-based way.
>>
>> Takuya, could you please do this? ;)
>
> Though I'm fine with making the patch better, I'm still thinking
> about the downside of it.
>
> In zap_spte, don't we need to search for the pointer to be removed in the
> global mmio-rmap list? How long can that list be?

It is not that bad. With soft MMU, rmap lists already grow to more than 300
entries. With hard MMU, mmio sptes are normally not zapped frequently (they
are set, not cleared). The worst case is zap-all-mmio-sptes, which removes
every mmio spte.
This operation can be sped up by applying my previous patch:
KVM: MMU: fast drop all spte on the pte_list

> Implementing it may not be difficult, but I'm not sure we would get a pure
> improvement. Unless it becomes 99% sure, I think we should first take the
> basic approach.

I am fairly sure that zapping all mmio sptes is faster than zapping the mmio
shadow pages. ;)

> What do you think?

I am wondering: if zapping all shadow pages is fast enough (after my
patchset), do we really need to care about this at all?
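To make the trade-off concrete, here is a minimal userspace sketch of the
global mmio-rmap idea being discussed. This is NOT actual KVM code: the names
(mmio_rmap_add, mmio_rmap_remove, zap_all_mmio_sptes) and the plain linked
list are invented for illustration; the real pte_list_desc machinery packs
several sptep pointers per node. It shows the two costs in question: the
linear search on the zap_spte path that Takuya asks about, and the single
list walk for zap-all-mmio-sptes instead of scanning every shadow page.

```c
/* Hypothetical sketch of a global mmio-rmap -- not real KVM code. */
#include <stdlib.h>

typedef unsigned long long u64;

struct mmio_rmap_node {
	u64 *sptep;                     /* pointer to one mmio spte */
	struct mmio_rmap_node *next;
};

static struct mmio_rmap_node *mmio_rmap_head;

/* Called when a mmio spte is created: O(1) insertion at the head. */
static void mmio_rmap_add(u64 *sptep)
{
	struct mmio_rmap_node *n = malloc(sizeof(*n));

	n->sptep = sptep;
	n->next = mmio_rmap_head;
	mmio_rmap_head = n;
}

/*
 * zap_spte path: must linearly search the global list for the pointer
 * being removed -- this is the per-zap cost Takuya is asking about.
 */
static void mmio_rmap_remove(u64 *sptep)
{
	struct mmio_rmap_node **pp = &mmio_rmap_head;

	while (*pp) {
		if ((*pp)->sptep == sptep) {
			struct mmio_rmap_node *dead = *pp;

			*pp = dead->next;
			free(dead);
			return;
		}
		pp = &(*pp)->next;
	}
}

/*
 * zap-all-mmio-sptes: one pass over the list, clearing each tracked
 * spte, instead of walking every shadow page looking for mmio entries.
 * Returns the number of sptes zapped.
 */
static int zap_all_mmio_sptes(void)
{
	int zapped = 0;

	while (mmio_rmap_head) {
		struct mmio_rmap_node *n = mmio_rmap_head;

		*n->sptep = 0;          /* clear the mmio spte */
		mmio_rmap_head = n->next;
		free(n);
		zapped++;
	}
	return zapped;
}
```

The design point the thread is weighing: insertion is O(1), zap-all becomes
O(number of mmio sptes), but each individual zap_spte pays an O(n) search of
the global list unless the node pointer is stored somewhere reachable from
the spte itself.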