From mboxrd@z Thu Jan 1 00:00:00 1970
From: Marcelo Tosatti
Subject: [patch 2/5] KVM: MMU: make for_each_shadow_entry aware of largepages
Date: Tue, 09 Jun 2009 18:30:11 -0300
Message-ID: <20090609213312.750051328@amt.cnet>
References: <20090609213009.436123773@amt.cnet>
Cc: avi@redhat.com, sheng.yang@intel.com, Marcelo Tosatti
To: kvm@vger.kernel.org
Return-path:
Received: from mx2.redhat.com ([66.187.237.31]:42285 "EHLO mx2.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1756754AbZFIVex (ORCPT );
	Tue, 9 Jun 2009 17:34:53 -0400
Content-Disposition: inline; filename=kvm-shadow-iterator-largepage-descend
Sender: kvm-owner@vger.kernel.org
List-ID:

This way there is no need to add explicit checks in every
for_each_shadow_entry user.

Signed-off-by: Marcelo Tosatti

Index: kvm/arch/x86/kvm/mmu.c
===================================================================
--- kvm.orig/arch/x86/kvm/mmu.c
+++ kvm/arch/x86/kvm/mmu.c
@@ -1273,6 +1273,11 @@ static bool shadow_walk_okay(struct kvm_
 {
 	if (iterator->level < PT_PAGE_TABLE_LEVEL)
 		return false;
+
+	if (iterator->level == PT_PAGE_TABLE_LEVEL)
+		if (is_large_pte(*iterator->sptep))
+			return false;
+
 	iterator->index = SHADOW_PT_INDEX(iterator->addr, iterator->level);
 	iterator->sptep	= ((u64 *)__va(iterator->shadow_addr)) + iterator->index;
 	return true;