From: Marcelo Tosatti <mtosatti@amt.cnet>
Subject: Re: [PATCH 0/8] KVM: Reduce mmu_lock hold time when zapping mmu pages
Date: Mon, 4 Feb 2013 11:29:56 -0200
Message-ID: <20130204132956.GA9005@amt.cnet>
To: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Cc: gleb@redhat.com, kvm@vger.kernel.org
In-Reply-To: <20130123191231.d66489d2.yoshikawa_takuya_b1@lab.ntt.co.jp>

On Wed, Jan 23, 2013 at 07:12:31PM +0900, Takuya Yoshikawa wrote:
> This patch set mitigates another mmu_lock hold time issue. Although
> this is not enough and I'm thinking of additional work already, this
> alone can reduce the lock hold time to some extent.
>
> Takuya Yoshikawa (8):
>   KVM: MMU: Fix and clean up for_each_gfn_* macros
>   KVM: MMU: Use list_for_each_entry_safe in kvm_mmu_commit_zap_page()
>   KVM: MMU: Add a parameter to kvm_mmu_prepare_zap_page() to update the next position
>   KVM: MMU: Introduce for_each_gfn_indirect_valid_sp_safe macro
>   KVM: MMU: Delete hash_link node in kvm_mmu_prepare_zap_page()
>   KVM: MMU: Introduce free_zapped_mmu_pages() for freeing mmu pages in a list
>   KVM: MMU: Split out free_zapped_mmu_pages() from kvm_mmu_commit_zap_page()
>   KVM: MMU: Move free_zapped_mmu_pages() out of the protection of mmu_lock
>
>  arch/x86/kvm/mmu.c | 149 +++++++++++++++++++++++++++++++++++-----------------
>  1 files changed, 101 insertions(+), 48 deletions(-)

There needs to be a limit on the number of pages whose freeing is delayed.

Note that n_used_mmu_pages is used both by the slab shrinker (to decide how
much pressure to apply) and by the allocators (to decide when more pages must
be allocated). You allow n_used_mmu_pages to become inaccurate, which is fine
as long as the error is bounded.

Perhaps cap invalid_pages at a maximum of 64 pages per round, and if that
limit is exceeded, release the excess memory inside mmu_lock (one by one)?
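Roughly along these lines (a sketch only -- the 64-page limit, the helper name
commit_zap_with_limit(), and its exact placement relative to the series'
free_zapped_mmu_pages() are illustrative assumptions, not code from the patches):

/* Illustrative bound on how far n_used_mmu_pages may lag reality. */
#define KVM_MAX_DELAYED_FREE_PAGES 64

/*
 * Sketch: with mmu_lock still held, free zapped pages one by one until
 * no more than KVM_MAX_DELAYED_FREE_PAGES remain on the list.  Only the
 * remaining, bounded set is handed to free_zapped_mmu_pages() after the
 * caller drops mmu_lock.
 */
static void commit_zap_with_limit(struct kvm *kvm,
				  struct list_head *invalid_pages,
				  int nr_zapped)
{
	struct kvm_mmu_page *sp, *nsp;

	list_for_each_entry_safe(sp, nsp, invalid_pages, link) {
		if (nr_zapped <= KVM_MAX_DELAYED_FREE_PAGES)
			break;
		list_del(&sp->link);
		kvm_mmu_free_page(sp);
		nr_zapped--;
	}
	/* caller: drop mmu_lock, then free_zapped_mmu_pages(invalid_pages) */
}

In the common case nothing would be freed under the lock, but the error in
n_used_mmu_pages could never exceed the limit.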