From: Takuya Yoshikawa
Subject: Re: [RFC] KVM MMU: improve large munmap efficiency
Date: Fri, 27 Jan 2012 09:59:47 +0900
Message-ID: <4F21F703.8040000@oss.ntt.co.jp>
To: Eric Northup
Cc: KVM

Hi,

(2012/01/27 8:24), Eric Northup wrote:
> Flush the shadow MMU instead of iterating over each host VA when doing
> a large invalidate-range callback.
>
> The previous code is O(N) in the number of virtual pages being
> invalidated, while holding both the MMU spinlock and the mmap_sem.
> Large unmaps can cause significant delays, during which the process is
> unkillable. Worse, all page allocation could be delayed if there is
> enough memory pressure that mmu_shrink gets called.
>
> Signed-off-by: Eric Northup
>
> ---
>
> We have seen delays of over 30 seconds doing a large (128GB) unmap.
>
> It'd be nicer to check whether the amount of work to be done by the
> entire flush is less than the work to be done iterating over each HVA
> page, but that information isn't currently available to the
> arch-independent part of KVM.

Using the number of (active) shadow pages may be one way; see
kvm->arch.n_used_mmu_pages.

> Better ideas would be most welcome ;-)

I will soon, this weekend if possible, send a patch series which may
speed up the kvm_unmap_hva() loop. Although my work was done to
optimize a different thing, dirty logging, I think this loop will also
be optimized. I have confirmed that dirty logging improved
significantly, so I hope your case will improve as well.
So, in addition to your patch, please check, if possible, to what
extent my patch series helps your case.

Thanks,
	Takuya