From: George Dunlap
Subject: Re: [PATCH 4/4] x86/PoD: shorten certain operations on higher order ranges
Date: Wed, 23 Sep 2015 18:10:55 +0100
Message-ID: <5602DD1F.4070802@citrix.com>
To: Jan Beulich, xen-devel
Cc: George Dunlap, Andrew Cooper, Keir Fraser

On 09/15/2015 08:37 AM, Jan Beulich wrote:
> @@ -574,41 +576,46 @@ recount:
>       * + There are PoD entries to handle, or
>       * + There is ram left, and we want to steal it
>       */
> -    for ( i=0;
> -          i<(1<<order) && (pod>0 || (steal_for_cache && ram > 0));
> -          i++)
> +    for ( i = 0;
> +          i < (1UL << order) && (pod > 0 || (steal_for_cache && ram > 0));
> +          i += n )
>      {
>          mfn_t mfn;
>          p2m_type_t t;
>          p2m_access_t a;
> +        unsigned int cur_order;
>  
> -        mfn = p2m->get_entry(p2m, gpfn + i, &t, &a, 0, NULL, NULL);
> +        mfn = p2m->get_entry(p2m, gpfn + i, &t, &a, 0, &cur_order, NULL);
> +        if ( order < cur_order )
> +            cur_order = order;
> +        n = 1UL << cur_order;
>          if ( t == p2m_populate_on_demand )
>          {
> -            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), 0, p2m_invalid,
> -                          p2m->default_access);
> -            p2m->pod.entry_count--;
> +            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), cur_order,
> +                          p2m_invalid, p2m->default_access);
> +            p2m->pod.entry_count -= n;
>              BUG_ON(p2m->pod.entry_count < 0);
> -            pod--;
> +            pod -= n;
>          }
>          else if ( steal_for_cache && p2m_is_ram(t) )
>          {
>              struct page_info *page;
> +            unsigned int j;
>  
>              ASSERT(mfn_valid(mfn));
>  
>              page = mfn_to_page(mfn);
>  
> -            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), 0, p2m_invalid,
> -                          p2m->default_access);
> -            set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
> -
> -            p2m_pod_cache_add(p2m, page, 0);
> +            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), cur_order,
> +                          p2m_invalid, p2m->default_access);
> +            for ( j = 0; j < n; ++j )
> +                set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
> +            p2m_pod_cache_add(p2m, page, cur_order);
>  
>              steal_for_cache =  ( p2m->pod.entry_count > p2m->pod.count );

This code will now steal an entire 2MiB page even if we only need a few
individual pages, then free it below (calling p2m_pod_set_cache_target()).
Upon reflection, this is actually a feature -- if we have singleton pages
in the cache, we can free those first, hopefully keeping the 2MiB page
together.

It would be good to have a comment here to that effect, though; perhaps:

"If we need less than 1<<cur_order, we may end up stealing more memory
here than we actually need.  That will be rectified below, however; and
freeing singleton pages from the cache first will hopefully keep the
larger page intact."

Also, shouldn't the steal_for_cache recalculation here be
(p2m->pod.entry_count - pod > p2m->pod.count) -- i.e., we should steal
if liabilities would be greater than assets *after* this function
completed, not before?

Otherwise, everything else looks good, thanks.  I assume you'll resubmit
this when the patch it depends on leaves RFC?  In which case I'll wait
until I see that version to Ack it.

 -George
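
As a minimal sketch of the accounting point above (not Xen code; the
pod_state struct and the steal_before()/steal_after() helpers are
hypothetical stand-ins for the p2m->pod bookkeeping), this contrasts
recalculating steal_for_cache against the current counts with
recalculating it against the counts as they will stand once the call
has removed the remaining 'pod' entries:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for p2m->pod.entry_count (liabilities: outstanding
 * PoD entries) and p2m->pod.count (assets: pages held in the PoD cache). */
struct pod_state {
    long entry_count;   /* PoD entries still to be populated */
    long count;         /* pages currently sitting in the PoD cache */
};

/* As in the quoted hunk: compare liabilities to assets *now*, ignoring
 * the 'pod' entries this call is still going to remove. */
static bool steal_before(const struct pod_state *p, long pod)
{
    (void)pod;
    return p->entry_count > p->count;
}

/* As suggested in the review: compare liabilities to assets as they will
 * stand *after* this call has removed the remaining 'pod' entries. */
static bool steal_after(const struct pod_state *p, long pod)
{
    return p->entry_count - pod > p->count;
}

int main(void)
{
    /* 512 outstanding entries, 384 cached pages, and this call will
     * remove another 256 entries: no further stealing is needed. */
    struct pod_state p = { .entry_count = 512, .count = 384 };
    long pod = 256;

    printf("steal (current counts):  %d\n", steal_before(&p, pod)); /* 1 */
    printf("steal (post-call counts): %d\n", steal_after(&p, pod)); /* 0 */
    return 0;
}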