From mboxrd@z Thu Jan 1 00:00:00 1970
From: Avi Kivity
Subject: Re: [RFC] KVM MMU: improve large munmap efficiency
Date: Sun, 29 Jan 2012 13:01:18 +0200
Message-ID: <4F2526FE.8040603@redhat.com>
References: 
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Cc: KVM
To: Eric Northup
Return-path: 
Received: from mx1.redhat.com ([209.132.183.28]:16249 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751220Ab2A2LBV
	(ORCPT ); Sun, 29 Jan 2012 06:01:21 -0500
In-Reply-To: 
Sender: kvm-owner@vger.kernel.org
List-ID: 

On 01/27/2012 01:24 AM, Eric Northup wrote:
> Flush the shadow MMU instead of iterating over each host VA when doing
> a large invalidate range callback.
>
> The previous code is O(N) in the number of virtual pages being
> invalidated, while holding both the MMU spinlock and the mmap_sem.
> Large unmaps can cause significant delay, during which the process is
> unkillable.  Worse, all page allocation could be delayed if there's
> enough memory pressure that mmu_shrink gets called.
>
> Signed-off-by: Eric Northup
>
> ---
>
> We have seen delays of over 30 seconds doing a large (128GB) unmap.
>
> It'd be nicer to check if the amount of work to be done by the entire
> flush is less than the work to be done iterating over each HVA page,
> but that information isn't currently available to the arch-independent
> part of KVM.
>
> Better ideas would be most welcome ;-)
>
>
> Tested by attaching a debugger to a running qemu w/kvm and running
> "call munmap(0, 1UL << 46)".
>

How about computing the intersection of (start, end) with the hva ranges
in kvm->memslots?  If there is no intersection, you exit immediately.

It's still possible for the work needed to drop just the intersection to
be larger than the work of dropping the entire shadow, but that's
unlikely.

-- 
error compiling committee.c: too many arguments to function
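The suggested early-exit could be sketched roughly as below. This is an illustrative stand-alone sketch, not actual KVM code: the `struct memslot` layout and the helper names `slot_intersects` / `any_slot_intersects` are simplified assumptions, standing in for KVM's real memslot structures.

```c
/*
 * Sketch: before flushing the shadow MMU for an invalidate-range
 * callback, test whether [start, end) overlaps any memslot's HVA
 * range.  If it overlaps none, there is nothing to invalidate and
 * the callback can return immediately.
 */
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SHIFT 12

/* Simplified stand-in for a KVM memory slot (hypothetical layout). */
struct memslot {
	unsigned long userspace_addr;	/* HVA where the slot starts */
	unsigned long npages;		/* slot length in pages */
};

/* Does [start, end) overlap the HVA range covered by @slot? */
static bool slot_intersects(const struct memslot *slot,
			    unsigned long start, unsigned long end)
{
	unsigned long hva_start = slot->userspace_addr;
	unsigned long hva_end = hva_start + (slot->npages << PAGE_SHIFT);

	/* Standard half-open interval overlap test. */
	return start < hva_end && end > hva_start;
}

/* Return true if any slot intersects [start, end). */
bool any_slot_intersects(const struct memslot *slots, size_t nslots,
			 unsigned long start, unsigned long end)
{
	size_t i;

	for (i = 0; i < nslots; i++)
		if (slot_intersects(&slots[i], start, end))
			return true;
	return false;	/* no overlap: the flush can be skipped */
}
```

With such a check in place, an unmap that doesn't touch guest memory at all costs only a walk over the (small) memslot array, while an unmap that does touch a slot still falls back to the full shadow flush.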