From mboxrd@z Thu Jan 1 00:00:00 1970
From: Marcelo Tosatti
Subject: [patch 2/5] KVM: MMU: make for_each_shadow_entry aware of largepages
Date: Thu, 11 Jun 2009 11:02:26 -0300
Message-ID: <20090611140416.633810344@localhost.localdomain>
References: <20090611140224.457657937@localhost.localdomain>
Cc: kvm@vger.kernel.org, Marcelo Tosatti
To: avi@redhat.com
Return-path:
Received: from mx2.redhat.com ([66.187.237.31]:55352 "EHLO mx2.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751878AbZFKOF1
	(ORCPT ); Thu, 11 Jun 2009 10:05:27 -0400
Received: from int-mx2.corp.redhat.com (int-mx2.corp.redhat.com [172.16.27.26])
	by mx2.redhat.com (8.13.8/8.13.8) with ESMTP id n5BE5TbW026911
	for ; Thu, 11 Jun 2009 10:05:29 -0400
In-Reply-To: <4A307819.6010503@redhat.com>
Content-Disposition: inline; filename=kvm-shadow-iterator-largepage-descend
Sender: kvm-owner@vger.kernel.org
List-ID:

This way there is no need to add explicit checks in every
for_each_shadow_entry user.

Signed-off-by: Marcelo Tosatti

Index: kvm/arch/x86/kvm/mmu.c
===================================================================
--- kvm.orig/arch/x86/kvm/mmu.c
+++ kvm/arch/x86/kvm/mmu.c
@@ -1273,6 +1273,11 @@ static bool shadow_walk_okay(struct kvm_
 {
 	if (iterator->level < PT_PAGE_TABLE_LEVEL)
 		return false;
+
+	if (iterator->level == PT_PAGE_TABLE_LEVEL)
+		if (is_large_pte(*iterator->sptep))
+			return false;
+
 	iterator->index = SHADOW_PT_INDEX(iterator->addr, iterator->level);
 	iterator->sptep = ((u64 *)__va(iterator->shadow_addr)) + iterator->index;
 	return true;

--