From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xiao Guangrong
Subject: Re: [PATCH 1/7] KVM: MMU: optimize pte write path if don't have protected sp
Date: Sun, 15 May 2011 16:33:10 +0800
Message-ID: <4DCF8FC6.8050600@cn.fujitsu.com>
References: <4DCEF5B1.3050706@cn.fujitsu.com> <4DCF8CBC.1040602@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Cc: Marcelo Tosatti , LKML , KVM
To: Avi Kivity
Return-path:
In-Reply-To: <4DCF8CBC.1040602@redhat.com>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

On 05/15/2011 04:20 PM, Avi Kivity wrote:
> On 05/15/2011 12:35 AM, Xiao Guangrong wrote:
>> Simply return from the kvm_mmu_pte_write path if no shadow page is
>> write-protected, then we can avoid walking all shadow pages and holding
>> mmu-lock
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 2841805..971e2d2 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -498,6 +498,7 @@ static void account_shadowed(struct kvm *kvm, gfn_t gfn)
>>  		linfo = lpage_info_slot(gfn, slot, i);
>>  		linfo->write_count += 1;
>>  	}
>> +	atomic_inc(&kvm->arch.indirect_shadow_pages);
>>  }
>>
>>  static void unaccount_shadowed(struct kvm *kvm, gfn_t gfn)
>> @@ -513,6 +514,7 @@ static void unaccount_shadowed(struct kvm *kvm, gfn_t gfn)
>>  		linfo->write_count -= 1;
>>  		WARN_ON(linfo->write_count < 0);
>>  	}
>> +	atomic_dec(&kvm->arch.indirect_shadow_pages);
>>  }
>
> These atomic ops are always called from within the spinlock, so we don't
> need an atomic_t here.
>
> Sorry, I should have noticed this on the first version.

We read indirect_shadow_pages atomically on the pte write path, and that
read is allowed outside of mmu_lock.
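To illustrate the pattern under discussion, here is a userspace sketch (made-up names and pthread/C11 primitives, not the actual kernel code): the counter is only ever modified while holding the lock, but the pte-write fast path reads it without taking the lock, so that read needs to be a single atomic load rather than a plain access.

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

/* Hypothetical userspace stand-ins for mmu_lock and
 * kvm->arch.indirect_shadow_pages. */
static pthread_mutex_t mmu_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int indirect_shadow_pages;

/* All writers run under the lock, so the updates themselves are
 * serialized; the atomic type is there for the lockless reader. */
static void account_shadowed(void)
{
	pthread_mutex_lock(&mmu_lock);
	atomic_fetch_add(&indirect_shadow_pages, 1);
	pthread_mutex_unlock(&mmu_lock);
}

static void unaccount_shadowed(void)
{
	pthread_mutex_lock(&mmu_lock);
	atomic_fetch_sub(&indirect_shadow_pages, 1);
	pthread_mutex_unlock(&mmu_lock);
}

/* Fast path: if no indirect shadow pages exist, return nonzero
 * ("nothing to do") without ever touching mmu_lock. */
static int pte_write_fast_path(void)
{
	return atomic_load(&indirect_shadow_pages) == 0;
}
```

A zero reading lets the caller skip the shadow-page walk entirely; a stale nonzero reading merely means taking the slow path under the lock, which is always safe.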