From: Gleb Natapov
Subject: Re: [PATCH] KVM: MMU: Fix mmu_shrink() so that it can free mmu pages as intended
Date: Thu, 5 Jul 2012 14:50:00 +0300
Message-ID: <20120705114959.GD15459@redhat.com>
References: <20120705195607.2e940693.yoshikawa.takuya@oss.ntt.co.jp>
In-Reply-To: <20120705195607.2e940693.yoshikawa.takuya@oss.ntt.co.jp>
To: Takuya Yoshikawa
Cc: avi@redhat.com, mtosatti@redhat.com, kvm@vger.kernel.org

On Thu, Jul 05, 2012 at 07:56:07PM +0900, Takuya Yoshikawa wrote:
> The following commit changed mmu_shrink() so that it would skip VMs
> whose n_used_mmu_pages is not zero and try to free pages from others:
>
Oops,

>   commit 1952639665e92481c34c34c3e2a71bf3e66ba362
>   KVM: MMU: do not iterate over all VMs in mmu_shrink()
>
> This patch fixes the function so that it can free mmu pages as before.
>
> Note that the "if (!nr_to_scan--)" check is removed since we do not try
> to free mmu pages from more than one VM.
>
IIRC it was proposed in the past that we should iterate over the vm list
until something is actually freed, but Avi was against it. I think the
probability of a VM with kvm->arch.n_used_mmu_pages == 0 is low, so
dropping the nr_to_scan check looks OK to me.
> Signed-off-by: Takuya Yoshikawa
> Cc: Gleb Natapov
> ---
>  arch/x86/kvm/mmu.c |    5 +----
>  1 files changed, 1 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 3b53d9e..5fd268a 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3957,11 +3957,8 @@ static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
>  		 * want to shrink a VM that only started to populate its MMU
>  		 * anyway.
>  		 */
> -		if (kvm->arch.n_used_mmu_pages > 0) {
> -			if (!nr_to_scan--)
> -				break;
> +		if (!kvm->arch.n_used_mmu_pages)
>  			continue;
> -		}
>
>  		idx = srcu_read_lock(&kvm->srcu);
>  		spin_lock(&kvm->mmu_lock);
> --
> 1.7.5.4

-- 
			Gleb.