From mboxrd@z Thu Jan 1 00:00:00 1970
From: Takuya Yoshikawa
Subject: Re: [RFC] KVM MMU: improve large munmap efficiency
Date: Fri, 27 Jan 2012 10:13:06 +0900
Message-ID: <4F21FA22.7020709@oss.ntt.co.jp>
References: <4F21F703.8040000@oss.ntt.co.jp>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: KVM
To: Eric Northup
Return-path:
Received: from serv2.oss.ntt.co.jp ([222.151.198.100]:53202 "EHLO serv2.oss.ntt.co.jp" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752720Ab2A0BLd (ORCPT ); Thu, 26 Jan 2012 20:11:33 -0500
In-Reply-To: <4F21F703.8040000@oss.ntt.co.jp>
Sender: kvm-owner@vger.kernel.org
List-ID:

(2012/01/27 9:59), Takuya Yoshikawa wrote:
>> We have seen delays of over 30 seconds doing a large (128GB) unmap.
>>
>> It'd be nicer to check if the amount of work to be done by the entire
>> flush is less than the work to be done iterating over each HVA page,
>> but that information isn't currently available to the arch-independent
>> part of KVM.
>
> Using the number of (active) shadow pages may be one way.
>
> See kvm->arch.n_used_mmu_pages.

Ah, sorry, you are looking for arch-independent information.

>
>
>>
>> Better ideas would be most welcome ;-)
>
>
> I will soon, this weekend if possible, send a patch series which may
> result in speeding up the kvm_unmap_hva() loop.

... and I also need to check whether my work can be implemented in an
arch-independent manner.

	Takuya

>
> Though my work was done to optimize a different thing, dirty logging,
> I think this loop will also be optimized.
>
> I have checked that dirty logging improved significantly, so I hope
> that your case will improve as well.
>
> So, in addition to your patch, please see to what extent my patch
> series will help your case, if possible.
>
> Thanks,
> 	Takuya