From: Marcelo Tosatti
Subject: Re: [PATCH] KVM: MMU: Fix mmu_shrink() so that it can free mmu pages as intended
Date: Fri, 20 Jul 2012 11:42:12 -0300
Message-ID: <20120720144212.GA19260@amt.cnet>
References: <20120705195607.2e940693.yoshikawa.takuya@oss.ntt.co.jp>
 <20120705114959.GD15459@redhat.com>
 <20120705230546.ed314f3d83b5f727492db2f1@gmail.com>
 <20120712183509.8e04491c.yoshikawa.takuya@oss.ntt.co.jp>
 <20120718205246.GB18886@amt.cnet>
 <20120720100434.270f2dca4604063a8789ee5a@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
To: Takuya Yoshikawa
Cc: Takuya Yoshikawa, Gleb Natapov, avi@redhat.com, kvm@vger.kernel.org
Content-Disposition: inline
In-Reply-To: <20120720100434.270f2dca4604063a8789ee5a@gmail.com>

On Fri, Jul 20, 2012 at 10:04:34AM +0900, Takuya Yoshikawa wrote:
> On Wed, 18 Jul 2012 17:52:46 -0300
> Marcelo Tosatti wrote:
> 
> > Can't understand, can you please expand more clearly?
> 
> I think mmu pages are not worth freeing under usual memory pressure,
> especially when we have EPT/NPT on.
> 
> What's happening:
> shrink_slab() vainly calls mmu_shrink() with the default batch size of 128,
> and mmu_shrink() takes a long time to zap far fewer mmu pages than the
> requested number, usually freeing just one. Sadly, KVM may recreate the
> page soon after that.
> 
> Since we set seeks to 10 times the default, total_scan is very small and
> shrink_slab() just wastes time freeing such a small amount of
> may-be-reallocated-soon memory: I want it to spend that time scanning
> other objects instead.
> 
> Actually the total amount of memory used for mmu pages is not huge in
> the case of EPT/NPT on: maybe smaller than that of rmap?

rmap size is a function of mmu pages, so mmu_shrink indirectly releases
rmap as well.

> So, it's clear that no one wants mmu pages to be freed like other objects.
> Sure, our seeks value usually prevents shrink_slab() from calling
> mmu_shrink(). But what if administrators want to drop clean caches on
> the host?
> 
> Documentation/sysctl/vm.txt says:
> Writing to this will cause the kernel to drop clean caches, dentries and
> inodes from memory, causing that memory to become free.
> 
> To free pagecache:
> echo 1 > /proc/sys/vm/drop_caches
> To free dentries and inodes:
> echo 2 > /proc/sys/vm/drop_caches
> To free pagecache, dentries and inodes:
> echo 3 > /proc/sys/vm/drop_caches
> 
> I don't want mmu pages to be freed in such cases.

drop_caches should be used on special occasions only. I would not worry
about it.

> So, how about no longer reporting/returning the total number of used
> mmu pages to shrink_slab()?
> 
> If we do so, it will think that there are not enough objects to get
> memory back from KVM.

No, it's important to be able to release memory quickly in low memory
conditions. I bet the reasoning behind the current seeks value
(10 * default) is close to arbitrary.

mmu_shrink can be smarter, by freeing pages which are less likely to be
used. IIRC Avi had some nice ideas for LRU-like schemes (search the
archives).

You can also consider the fact that freeing a higher-level pagetable
frees all of its children (that is quite dumb actually; sequential
shrink passes should free only pages with no children).
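To make the discussion concrete, the wiring in arch/x86/kvm/mmu.c is
roughly the following (a simplified sketch from memory, not the literal
code; the real zap loop walks vm_list under kvm_lock):

/* Simplified sketch of the current shrinker hookup. */
static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
{
	if (sc->nr_to_scan > 0) {
		/* pick a struct kvm from vm_list and zap shadow pages
		 * until nr_to_scan is consumed (often only one page) */
	}

	/* the count reported back is the *total* number of used mmu pages */
	return percpu_counter_read_positive(&kvm_total_used_mmu_pages);
}

static struct shrinker mmu_shrinker = {
	.shrink = mmu_shrink,
	.seeks = DEFAULT_SEEKS * 10,	/* why total_scan ends up so small */
};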
> In the case of shadow paging, guests can do bad things to make us
> allocate an enormous number of mmu pages, so we should report only the
> excess to shrink_slab() as freeable objects, not the total.

A guest idle for 2 months should not have its mmu pages in memory.

> |--- needed ---|--- freeable under memory pressure ---|
> 
> We may be able to use n_max_mmu_pages for this: the shrinker tries to
> free mmu pages until the number drops to the goal.
> 
> Thanks,
> 	Takuya
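For reference, the "report only the excess" idea above would amount to
something like this (a hypothetical sketch, not an actual patch; the
function name is made up, and it assumes the existing n_used_mmu_pages
field in struct kvm_arch, with the goal derived somehow from
n_max_mmu_pages):

/* Hypothetical sketch: report to shrink_slab() only the part of the
 * mmu pages above a per-VM "needed" goal; the part below the goal is
 * never offered as freeable.
 */
static unsigned long kvm_mmu_freeable_pages(struct kvm *kvm, unsigned long goal)
{
	unsigned long used = kvm->arch.n_used_mmu_pages;

	return used > goal ? used - goal : 0;	/* the "freeable" tail only */
}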