From mboxrd@z Thu Jan 1 00:00:00 1970
From: George Dunlap
Subject: [PATCH 4 of 4] xen,pod: Try to reclaim superpages when ballooning down
Date: Wed, 27 Jun 2012 17:57:31 +0100
Message-ID: <71a22d6d940f27d8dfbc.1340816251@elijah>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: xen-devel@lists.xensource.com
Cc: george.dunlap@eu.citrix.com
List-Id: xen-devel@lists.xenproject.org

# HG changeset patch
# User George Dunlap
# Date 1340815812 -3600
# Node ID 71a22d6d940f27d8dfbcfc12d1377e4622f981bd
# Parent  c71f52608fd8867062cc40a1354305f2af17b2c3
xen,pod: Try to reclaim superpages when ballooning down

Windows balloon drivers can typically only get 4k pages from the kernel,
and so hand them back at that level.  Try to regain superpages by
checking the superpage frame that the 4k page is in, to see if we can
reclaim the whole thing for the PoD cache.

This also modifies p2m_pod_zero_check_superpage() to return
SUPERPAGE_PAGES on success.

v2:
 - Rewritten to simply do the check as in the demand-fault case, without
   needing to know that the p2m entry is a superpage.
 - Also, took out the re-writing of the reclaim loop, leaving it
   optimized for 4k pages (by far the most common case) and simplifying
   the patch.
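[Editor's sketch] The reclaim idea described above can be illustrated with a small standalone model. This is not Xen's actual p2m code: `page_reclaimable[]`, `GFN_COUNT`, and `try_reclaim_superpage()` are hypothetical stand-ins for the real p2m query and `p2m_pod_zero_check_superpage()`; only the alignment arithmetic and the SUPERPAGE_PAGES-on-success convention mirror the patch.

```c
#include <assert.h>
#include <stdbool.h>

/* A 2MB superpage covers 2^9 = 512 4k pages. */
#define SUPERPAGE_ORDER 9
#define SUPERPAGE_PAGES (1UL << SUPERPAGE_ORDER)

#define GFN_COUNT 1024
/* Stand-in for "this 4k page is zero / no longer used by the guest". */
static bool page_reclaimable[GFN_COUNT];

/* When the balloon driver hands back a single 4k page at 'gfn', check the
 * whole superpage frame containing it.  If every 4k page in that frame can
 * be reclaimed, return SUPERPAGE_PAGES (as the patched
 * p2m_pod_zero_check_superpage() does on success); otherwise return 0. */
static unsigned long try_reclaim_superpage(unsigned long gfn)
{
    /* Align down to the start of the superpage frame. */
    unsigned long base = gfn & ~(SUPERPAGE_PAGES - 1);

    for (unsigned long i = 0; i < SUPERPAGE_PAGES; i++)
        if (!page_reclaimable[base + i])
            return 0;   /* frame still partly in use: reclaim nothing */

    return SUPERPAGE_PAGES;
}
```

A single in-use 4k page anywhere in the frame defeats the reclaim for that frame, which is why the patch only attempts it opportunistically on each freed page.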
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -488,6 +488,10 @@ p2m_pod_offline_or_broken_replace(struct
     return;
 }
 
+static int
+p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn);
+
+
 /* This function is needed for two reasons:
  * + To properly handle clearing of PoD entries
  * + To "steal back" memory being freed for the PoD cache, rather than
@@ -505,8 +509,8 @@ p2m_pod_decrease_reservation(struct doma
     int i;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
-    int steal_for_cache = 0;
-    int pod = 0, nonpod = 0, ram = 0;
+    int steal_for_cache;
+    int pod, nonpod, ram;
 
     gfn_lock(p2m, gpfn, order);
     pod_lock(p2m);
@@ -516,13 +520,15 @@ p2m_pod_decrease_reservation(struct doma
     if ( p2m->pod.entry_count == 0 )
         goto out_unlock;
 
+    if ( unlikely(d->is_dying) )
+        goto out_unlock;
+
+recount:
+    pod = nonpod = ram = 0;
+
     /* Figure out if we need to steal some freed memory for our cache */
     steal_for_cache = ( p2m->pod.entry_count > p2m->pod.count );
 
-    if ( unlikely(d->is_dying) )
-        goto out_unlock;
-
-    /* See what's in here. */
     /* FIXME: Add contiguous; query for PSE entries? */
     for ( i=0; i<(1<<order); i++ )
[...]
+    p2m->pod.entry_count += SUPERPAGE_PAGES;
+    ret = SUPERPAGE_PAGES;
+
 out_reset:
     if ( reset )
         set_p2m_entry(p2m, gfn, mfn0, 9, type0, p2m->default_access);
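[Editor's sketch] The restructured "recount" block in p2m_pod_decrease_reservation() can be modelled in isolation as below. This is a simplified standalone sketch, not Xen's API: `enum entry_type`, `struct pod_state`, and `recount()` are hypothetical names; only the tallying and the steal_for_cache condition mirror the patched code. The point of moving the recount under a label is that the patch can jump back to it after a superpage reclaim changes the counts.

```c
#include <assert.h>

enum entry_type { ENTRY_POD, ENTRY_RAM, ENTRY_OTHER };

struct pod_state {
    long entry_count;  /* outstanding PoD entries (p2m->pod.entry_count) */
    long cache_count;  /* pages held in the PoD cache (p2m->pod.count)   */
};

/* Tally what is in a 2^order range of entries and decide whether freed
 * RAM should be stolen for the PoD cache. */
static void recount(const enum entry_type *e, unsigned long order,
                    const struct pod_state *pod,
                    int *steal_for_cache, int *npod, int *nonpod, int *ram)
{
    *npod = *nonpod = *ram = 0;

    /* Steal freed memory only while outstanding PoD entries exceed the
     * pages available in the cache to back them. */
    *steal_for_cache = (pod->entry_count > pod->cache_count);

    for (unsigned long i = 0; i < (1UL << order); i++) {
        if (e[i] == ENTRY_POD)
            (*npod)++;
        else {
            (*nonpod)++;
            if (e[i] == ENTRY_RAM)
                (*ram)++;
        }
    }
}
```

Resetting the three counters at the label (rather than at declaration, as before the patch) is what makes the re-entry via `goto recount` safe.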