From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xiao Guangrong
Subject: [PATCH v2 09/16] KVM: MMU: fast mmu_need_write_protect path for hard mmu
Date: Fri, 13 Apr 2012 18:13:54 +0800
Message-ID: <4F87FC62.3020003@linux.vnet.ibm.com>
References: <4F87FA69.5060106@linux.vnet.ibm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM
To: Xiao Guangrong
Return-path:
In-Reply-To: <4F87FA69.5060106@linux.vnet.ibm.com>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

With a direct MMU and no nested guests, no page is write-protected by
shadow page table protection, so mmu_need_write_protect() can return 0
early when there are no indirect shadow pages.

Signed-off-by: Xiao Guangrong
---
 arch/x86/kvm/mmu.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 53e92de..0c6e92d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2293,6 +2293,9 @@ static int mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
 	struct hlist_node *node;
 	bool need_unsync = false;

+	if (!vcpu->kvm->arch.indirect_shadow_pages)
+		return 0;
+
 	for_each_gfn_indirect_valid_sp(vcpu->kvm, s, gfn, node) {
 		if (!can_unsync)
 			return 1;
--
1.7.7.6