From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gleb Natapov
Subject: Re: [PATCH] do not free active mmu pages in free_mmu_pages()
Date: Mon, 16 Mar 2009 22:34:01 +0200
Message-ID: <20090316203401.GB7898@redhat.com>
References: <20090311100755.GA19724@redhat.com> <20090316201533.GA4477@amt.cnet>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20090316201533.GA4477@amt.cnet>
To: Marcelo Tosatti
Cc: avi@redhat.com, marcelo@redhat.com, kvm@vger.kernel.org
List-ID:

On Mon, Mar 16, 2009 at 05:15:33PM -0300, Marcelo Tosatti wrote:
> On Wed, Mar 11, 2009 at 12:07:55PM +0200, Gleb Natapov wrote:
> > free_mmu_pages() should only undo what alloc_mmu_pages() does.
> >
> > Signed-off-by: Gleb Natapov
> >
> > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > index 2a36f7f..b625ed4 100644
> > --- a/arch/x86/kvm/mmu.c
> > +++ b/arch/x86/kvm/mmu.c
> > @@ -2638,14 +2638,6 @@ EXPORT_SYMBOL_GPL(kvm_disable_tdp);
> >
> >  static void free_mmu_pages(struct kvm_vcpu *vcpu)
> >  {
> > -	struct kvm_mmu_page *sp;
> > -
> > -	while (!list_empty(&vcpu->kvm->arch.active_mmu_pages)) {
> > -		sp = container_of(vcpu->kvm->arch.active_mmu_pages.next,
> > -				  struct kvm_mmu_page, link);
> > -		kvm_mmu_zap_page(vcpu->kvm, sp);
> > -		cond_resched();
> > -	}
> >  	free_page((unsigned long)vcpu->arch.mmu.pae_root);
> >  }
>
> Doesnt the vm shutdown path rely on the while loop you removed to free
> all shadow pages before freeing the mmu kmem caches, if mmu notifiers
> is disabled?
>
Shouldn't mmu_free_roots() on all vcpus clear all mmu pages?
> And how harmful is that loop? Zaps the entire cache on cpu hotunplug?
>
KVM doesn't support vcpu destruction, but destruction is called anyway
on various error conditions. The one that is easy to trigger is creating
a vcpu with the same id simultaneously from two threads. The result is
an oops in random places.

--
			Gleb.