From mboxrd@z Thu Jan  1 00:00:00 1970
From: Marcelo Tosatti
Subject: Re: [PATCH 0/4 v2] kvm: rework KVM mmu_shrink() code
Date: Mon, 23 Aug 2010 23:07:21 -0300
Message-ID: <20100824020721.GA14726@amt.cnet>
References: <20100820011054.GA11297@tpepper-t61p.dolavim.us> <4C724BDB.8020604@redhat.com> <4C724D13.6000807@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: QUOTED-PRINTABLE
Cc: Avi Kivity, Tim Pepper, Lai Jiangshan, Dave Hansen, LKML, kvm@vger.kernel.org
To: Xiaotian Feng
Return-path:
Content-Disposition: inline
In-Reply-To:
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

On Mon, Aug 23, 2010 at 07:11:11PM +0800, Xiaotian Feng wrote:
> On Mon, Aug 23, 2010 at 6:27 PM, Avi Kivity wrote:
> > On 08/23/2010 01:22 PM, Avi Kivity wrote:
> >>
> >>
> >> I see a lot of soft lockups with this patchset:
> >
> > This is running the emulator.flat test case, with shadow paging. This test
> > triggers a lot (millions) of mmu mode switches.
> >
>
> Does the following patch fix your issue?
>
> The latest kvm mmu_shrink rework changes
> kvm->arch.n_used_mmu_pages/kvm->arch.n_max_mmu_pages in
> kvm_mmu_free_page()/kvm_mmu_alloc_page(), which are called by
> kvm_mmu_commit_zap_page(). So kvm->arch.n_used_mmu_pages (and hence
> kvm_mmu_available_pages(vcpu->kvm)) is unchanged until
> kvm_mmu_commit_zap_page() runs, which made
> kvm_mmu_change_mmu_pages()/__kvm_mmu_free_some_pages() loop forever.
> Moving kvm_mmu_commit_zap_page() inside the while loop lets it make
> progress and terminate normally.
>
> ---
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index f52a965..7e09a21 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -1726,8 +1726,8 @@ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int goal_nr_mmu_pages)
> 					    struct kvm_mmu_page, link);
> 			kvm_mmu_prepare_zap_page(kvm, page, &invalid_list);
> +			kvm_mmu_commit_zap_page(kvm, &invalid_list);
> 		}
> -		kvm_mmu_commit_zap_page(kvm, &invalid_list);
> 		goal_nr_mmu_pages = kvm->arch.n_used_mmu_pages;
> 	}
>
> @@ -2976,9 +2976,9 @@ void __kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
> 		sp = container_of(vcpu->kvm->arch.active_mmu_pages.prev,
> 				  struct kvm_mmu_page, link);
> 		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, &invalid_list);
> +		kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
> 		++vcpu->kvm->stat.mmu_recycled;
> 	}
> -	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
> }
>
> int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code)

Please resend with a signed-off-by, and proper subject for the patch.

Thanks