From mboxrd@z Thu Jan  1 00:00:00 1970
From: Dan Magenheimer
Subject: Re: [PATCH v8 1/2] hypervisor: XENMEM_claim_pages (subop of existing) hypercall
Date: Wed, 28 Nov 2012 10:05:49 -0800 (PST)
References: <50B6479902000078000AC3EB@nat28.tlf.novell.com>
In-Reply-To: <50B6479902000078000AC3EB@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Jan Beulich
Cc: Keir Fraser, Ian Campbell, Konrad Wilk, Andres Lagar-Cavilla, Matthew Daley, Tim Deegan, xen-devel@lists.xen.org, Zhigang Wang
List-Id: xen-devel@lists.xenproject.org

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Subject: Re: [PATCH v8 1/2] hypervisor: XENMEM_claim_pages (subop of existing) hypercall
>
> >>> On 28.11.12 at 16:50, Dan Magenheimer wrote:
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -3841,7 +3841,7 @@ int donate_page(
> >      {
> >          if ( d->tot_pages >= d->max_pages )
> >              goto fail;
> > -        d->tot_pages++;
> > +        domain_adjust_tot_pages(d, 1);
>
> I would generally expect there to always be paired increments
> and decrements, yet I fail to spot this one's counterpart. There
> is one a few lines down in steal_page().
>
> > --- a/xen/common/grant_table.c
> > +++ b/xen/common/grant_table.c
> > @@ -1656,7 +1656,7 @@ gnttab_transfer(
> >          }
> >
> >          /* Okay, add the page to 'e'. */
> > -        if ( unlikely(e->tot_pages++ == 0) )
> > +        if ( unlikely(domain_adjust_tot_pages(e, 1) == 1) )
>
> This one's counterpart ought to also be the one in steal_page(),
> but did you really check? As that isn't the first one you missed,
> what did you do to find them all (apparently you didn't simply
> grep for "tot_pages")?

Urk. I did grep (-r) many times. I'm embarrassed that this one
slipped past me among the dozens of unchanged read-not-modify
uses of d->tot_pages.
I'm glad for your additional set of eyes. I have now carefully
audited the remaining uses to confirm that the one in steal_page()
is indeed the last. Sorry!

> > +unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
> > +{
> > +    long dom_before, dom_after, dom_claimed, sys_before, sys_after;
> > +
> > +    ASSERT(spin_is_locked(&d->page_alloc_lock));
> > +    /*
> > +     * can test d->claimed_pages race-free because it can only change
> > +     * if d->page_alloc_lock and heap_lock are both held, see also
> > +     * domain_set_unclaimed_pages below
> > +     */
> > +    if ( !d->unclaimed_pages )
> > +        return d->tot_pages += pages;
> > +    spin_lock(&heap_lock);
> > +    d->tot_pages += pages;
>
> Why don't you do this first thing, right after the comment above?
> Would remove redundancy.

Hmmm... doesn't that create a window where d->tot_pages and
d->unclaimed_pages are inconsistent?

Hmmm... yes, but that matters only if code which requires their
consistency is not also protected by both locks. AFAICT no such
code exists, so... will do.

> Perhaps then even invert the if()
> condition and have just a single return point.

Rather than invert the if(), unless you prefer the extra level of
indentation, I'd prefer:

+    d->tot_pages += pages;
+    if ( !d->unclaimed_pages )
+        goto out;
+out:
+    return d->tot_pages;

Would that be acceptable?

> Plus, if you went the route Keir suggested and split out the
> conversion to domain_adjust_tot_pages() without anything
> else, that would be what could probably go in
> uncontroversially, making the remaining amount of changes
> quite a bit smaller (and touching a more limited set of files).

OK, will do.

Thanks again!
Dan
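P.S. For future audits of this kind, a grep that matches only modifying
uses of tot_pages filters out the read-only noise. This is just a sketch
(the exact regex is my own suggestion, not something from this thread,
and the demo file below is made up for illustration); in the real tree
you would run the same pattern over xen/ with --include='*.c':

```shell
# Demo of a grep that finds writes to tot_pages (++ / -- / += / -= /
# plain assignment) while skipping read-only uses and comparisons.
mkdir -p /tmp/totpages-demo
cat > /tmp/totpages-demo/a.c <<'EOF'
x = d->tot_pages;                    /* read-only: no match */
if ( d->tot_pages >= d->max_pages )  /* comparison: no match */
d->tot_pages += pages;               /* write: match */
d->tot_pages++;                      /* write: match */
EOF
grep -rnE 'tot_pages(\+\+|--|[[:space:]]*(\+=|-=|=[^=]))' /tmp/totpages-demo
```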
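P.P.S. If it helps to see the restructuring concretely, here is a
standalone sketch of the single-exit shape discussed above. It is
illustrative only: the struct, the claim-consumption arithmetic, and
the commented-out lock calls are simplifications I made up so the
fragment compiles on its own; they are not the actual Xen code from
the patch.

```c
/* Simplified stand-ins for the Xen types: illustrative only. */
struct domain {
    unsigned long tot_pages;       /* pages currently owned by the domain */
    unsigned long unclaimed_pages; /* claimed but not yet allocated */
};

/* Global pool of outstanding claims; guarded by heap_lock in real Xen. */
static unsigned long total_unclaimed_pages;

/*
 * Sketch of the ordering Jan suggested: adjust tot_pages first (the
 * caller holds d->page_alloc_lock), then touch the claim accounting
 * only when a claim is outstanding. The claim bookkeeping below is a
 * plausible simplification, not the exact arithmetic of the v8 patch.
 */
static unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
{
    d->tot_pages += pages;
    if ( !d->unclaimed_pages )
        goto out;
    /* spin_lock(&heap_lock) would go here in real code. */
    if ( pages > 0 )
    {
        /* An allocation consumes up to 'pages' of the outstanding claim. */
        unsigned long consumed = (unsigned long)pages < d->unclaimed_pages
                                 ? (unsigned long)pages : d->unclaimed_pages;
        d->unclaimed_pages -= consumed;
        total_unclaimed_pages -= consumed;
    }
    /* spin_unlock(&heap_lock) would go here. */
 out:
    return d->tot_pages;
}
```

With this shape, the early "goto out" path never takes heap_lock, so
the common no-claim case stays as cheap as the current code, at the
cost of the brief tot_pages/unclaimed_pages window discussed above.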