From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xiao Guangrong
Subject: [PATCH 5/8] KVM: MMU: optimize walking unsync shadow page
Date: Fri, 16 Dec 2011 18:16:53 +0800
Message-ID: <4EEB1A95.6030708@linux.vnet.ibm.com>
References: <4EEB19AF.5070501@linux.vnet.ibm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM
To: Xiao Guangrong
In-Reply-To: <4EEB19AF.5070501@linux.vnet.ibm.com>
Sender: kvm-owner@vger.kernel.org

unsync_children is exactly the number of unsync sptes, so we can use it
to stop the walk early and avoid unnecessary spte fetching.

Signed-off-by: Xiao Guangrong
---
 arch/x86/kvm/mmu.c |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9bd2084..16e0642 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1468,12 +1468,15 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
 			   struct kvm_mmu_pages *pvec)
 {
 	u64 *spte;
-	int i, ret, nr_unsync_leaf = 0;
+	int i, ret, nr_unsync_leaf = 0, unsync_children = sp->unsync_children;
 
 	for_each_unsync_children(sp, spte, i) {
 		struct kvm_mmu_page *child;
 		u64 ent = *spte;
 
+		if (!unsync_children--)
+			break;
+
 		WARN_ON(!is_shadow_present_pte(ent) || is_large_pte(ent));
 
 		child = page_header(ent & PT64_BASE_ADDR_MASK);
-- 
1.7.7.4
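
[Editor's illustration] The sketch below is a standalone, simplified model of
the early-break idea in the patch, not kernel code: toy_shadow_page,
toy_unsync_walk and the flat unsync[] array are made-up stand-ins for the real
kvm_mmu_page and its unsync child bitmap, and the real walker only iterates
slots marked in that bitmap. The point it demonstrates is the same: once as
many unsync entries as the counter promises have been seen, the remaining
slots cannot hold any more, so the scan stops.

	/* Simplified model of stopping a walk once all counted
	 * unsync entries have been found. Names are illustrative only. */
	#include <stdio.h>

	#define NR_ENTRIES 512		/* slots in a toy "shadow page" */

	struct toy_shadow_page {
		int unsync_children;		/* how many slots are unsync */
		unsigned char unsync[NR_ENTRIES];	/* 1 if slot i is unsync */
	};

	/* Scan the slots, breaking out as soon as every unsync entry
	 * promised by unsync_children has been seen. */
	static int toy_unsync_walk(struct toy_shadow_page *sp)
	{
		int i, found = 0, remaining = sp->unsync_children;

		for (i = 0; i < NR_ENTRIES; i++) {
			if (!remaining)
				break;		/* nothing unsync left to find */
			if (sp->unsync[i]) {
				found++;
				remaining--;
			}
		}
		printf("visited %d of %d slots, found %d unsync entries\n",
		       i, NR_ENTRIES, found);
		return found;
	}

	int main(void)
	{
		struct toy_shadow_page sp = { .unsync_children = 2 };

		sp.unsync[3] = 1;
		sp.unsync[10] = 1;	/* both unsync entries sit early */
		toy_unsync_walk(&sp);	/* stops after slot 10, not slot 511 */
		return 0;
	}

Built with gcc, this prints that only 11 of 512 slots were visited, which is
the kind of saving the counter check buys __mmu_unsync_walk().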