From mboxrd@z Thu Jan  1 00:00:00 1970
From: Marcelo Tosatti
Subject: Re: [patch 2/5] KVM: MMU: make for_each_shadow_entry aware of largepages
Date: Thu, 11 Jun 2009 09:38:54 -0300
Message-ID: <20090611123854.GA4101@amt.cnet>
References: <20090609213009.436123773@amt.cnet> <20090609213312.750051328@amt.cnet> <4A2F7996.8020805@redhat.com> <4A2F7B01.90807@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
To: Avi Kivity
Cc: kvm@vger.kernel.org, sheng.yang@intel.com
In-Reply-To: <4A2F7B01.90807@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID: kvm@vger.kernel.org

On Wed, Jun 10, 2009 at 12:21:05PM +0300, Avi Kivity wrote:
> Avi Kivity wrote:
>> Marcelo Tosatti wrote:
>>> This way there is no need to add explicit checks in every
>>> for_each_shadow_entry user.
>>>
>>> Signed-off-by: Marcelo Tosatti
>>>
>>> Index: kvm/arch/x86/kvm/mmu.c
>>> ===================================================================
>>> --- kvm.orig/arch/x86/kvm/mmu.c
>>> +++ kvm/arch/x86/kvm/mmu.c
>>> @@ -1273,6 +1273,11 @@ static bool shadow_walk_okay(struct kvm_
>>>  {
>>>  	if (iterator->level < PT_PAGE_TABLE_LEVEL)
>>>  		return false;
>>> +
>>> +	if (iterator->level == PT_PAGE_TABLE_LEVEL)
>>> +		if (is_large_pte(*iterator->sptep))
>>> +			return false;
>>>
>> s/==/>/?
>>
>
> Ah, it's actually fine. But changing == to >= will make it 1GB-page-ready.

Humpf, better to check the level explicitly before interpreting bit 7, so let's skip this for 1GB pages.