From: Marcelo Tosatti
Subject: Re: [PATCH 8/8] KVM: MMU: Move free_zapped_mmu_pages() out of the protection of mmu_lock
Date: Mon, 4 Feb 2013 11:50:00 -0200
Message-ID: <20130204135000.GC9005@amt.cnet>
References: <20130123191231.d66489d2.yoshikawa_takuya_b1@lab.ntt.co.jp> <20130123191811.efba4200.yoshikawa_takuya_b1@lab.ntt.co.jp>
In-Reply-To: <20130123191811.efba4200.yoshikawa_takuya_b1@lab.ntt.co.jp>
To: Takuya Yoshikawa
Cc: gleb@redhat.com, kvm@vger.kernel.org

On Wed, Jan 23, 2013 at 07:18:11PM +0900, Takuya Yoshikawa wrote:
> We noticed that kvm_mmu_zap_all() could take hundreds of milliseconds
> for zapping mmu pages with mmu_lock held.
>
> Although we need to do conditional rescheduling for completely
> fixing this issue, we can reduce the hold time to some extent by moving
> free_zapped_mmu_pages() out of the protection. Since invalid_list can
> be very long, the effect is not negligible.
>
> Note: this patch does not treat non-trivial cases.
>
> Signed-off-by: Takuya Yoshikawa

Can you describe the case that's biting? Is it

	/*
	 * If memory slot is created, or moved, we need to clear all
	 * mmio sptes.
	 */
	if (npages && old.base_gfn != mem->guest_phys_addr >> PAGE_SHIFT) {
		kvm_mmu_zap_all(kvm);
		kvm_reload_remote_mmus(kvm);
	}

Because conditional rescheduling for kvm_mmu_zap_all() might not be
desirable: KVM_SET_USER_MEMORY has low latency requirements.