Subject: Re: [PATCH v3 02/11] KVM: MMU: introduce kvm_mmu_gfn_{allow,disallow}_lpage
From: Paolo Bonzini
To: Xiao Guangrong
Cc: gleb@kernel.org, mtosatti@redhat.com, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, kai.huang@linux.intel.com, jike.song@intel.com
Date: Fri, 19 Feb 2016 12:09:07 +0100
Message-ID: <56C6F7D3.3000205@redhat.com>
References: <1455449503-20993-1-git-send-email-guangrong.xiao@linux.intel.com>
 <1455449503-20993-3-git-send-email-guangrong.xiao@linux.intel.com>
In-Reply-To: <1455449503-20993-3-git-send-email-guangrong.xiao@linux.intel.com>

On 14/02/2016 12:31, Xiao Guangrong wrote:
> Abstract the common operations from account_shadowed() and
> unaccount_shadowed(), then introduce kvm_mmu_gfn_disallow_lpage()
> and kvm_mmu_gfn_allow_lpage().
> 
> These two functions will be used by page tracking in a later patch.
> 
> Signed-off-by: Xiao Guangrong
> ---
>  arch/x86/kvm/mmu.c | 38 +++++++++++++++++++++++++-------------
>  arch/x86/kvm/mmu.h |  3 +++
>  2 files changed, 28 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index de9e992..e1bb66c 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -776,21 +776,39 @@ static struct kvm_lpage_info *lpage_info_slot(gfn_t gfn,
>  	return &slot->arch.lpage_info[level - 2][idx];
>  }
>  
> +static void update_gfn_disallow_lpage_count(struct kvm_memory_slot *slot,
> +					    gfn_t gfn, int count)
> +{
> +	struct kvm_lpage_info *linfo;
> +	int i;
> +
> +	for (i = PT_DIRECTORY_LEVEL; i <= PT_MAX_HUGEPAGE_LEVEL; ++i) {
> +		linfo = lpage_info_slot(gfn, slot, i);
> +		linfo->disallow_lpage += count;
> +		WARN_ON(linfo->disallow_lpage < 0);
> +	}
> +}
> +
> +void kvm_mmu_gfn_disallow_lpage(struct kvm_memory_slot *slot, gfn_t gfn)
> +{
> +	update_gfn_disallow_lpage_count(slot, gfn, 1);
> +}
> +
> +void kvm_mmu_gfn_allow_lpage(struct kvm_memory_slot *slot, gfn_t gfn)
> +{
> +	update_gfn_disallow_lpage_count(slot, gfn, -1);
> +}
> +
>  static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
>  {
>  	struct kvm_memslots *slots;
>  	struct kvm_memory_slot *slot;
> -	struct kvm_lpage_info *linfo;
>  	gfn_t gfn;
> -	int i;
>  
>  	gfn = sp->gfn;
>  	slots = kvm_memslots_for_spte_role(kvm, sp->role);
>  	slot = __gfn_to_memslot(slots, gfn);
> -	for (i = PT_DIRECTORY_LEVEL; i <= PT_MAX_HUGEPAGE_LEVEL; ++i) {
> -		linfo = lpage_info_slot(gfn, slot, i);
> -		linfo->disallow_lpage += 1;
> -	}
> +	kvm_mmu_gfn_disallow_lpage(slot, gfn);
>  	kvm->arch.indirect_shadow_pages++;
>  }
>  
> @@ -798,18 +816,12 @@ static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
>  {
>  	struct kvm_memslots *slots;
>  	struct kvm_memory_slot *slot;
> -	struct kvm_lpage_info *linfo;
>  	gfn_t gfn;
> -	int i;
>  
>  	gfn = sp->gfn;
>  	slots = kvm_memslots_for_spte_role(kvm, sp->role);
>  	slot = __gfn_to_memslot(slots, gfn);
> -	for (i = PT_DIRECTORY_LEVEL; i <= PT_MAX_HUGEPAGE_LEVEL; ++i) {
> -		linfo = lpage_info_slot(gfn, slot, i);
> -		linfo->disallow_lpage -= 1;
> -		WARN_ON(linfo->disallow_lpage < 0);
> -	}
> +	kvm_mmu_gfn_allow_lpage(slot, gfn);
>  	kvm->arch.indirect_shadow_pages--;
>  }
>  
> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index 55ffb7b..de92bed 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -174,4 +174,7 @@ static inline bool permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
>  
>  void kvm_mmu_invalidate_zap_all_pages(struct kvm *kvm);
>  void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end);
> +
> +void kvm_mmu_gfn_disallow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
> +void kvm_mmu_gfn_allow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
>  #endif
> 

Reviewed-by: Paolo Bonzini