public inbox for linux-kernel@vger.kernel.org
From: Andrew Morton <akpm@linux-foundation.org>
To: Hugh Dickins <hugh@veritas.com>
Cc: Joe Jin <joe.jin@oracle.com>,
	bill.irwin@oracle.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] Add nid sanity on alloc_pages_node
Date: Tue, 17 Jul 2007 11:58:49 -0700	[thread overview]
Message-ID: <20070717115849.9f5e435c.akpm@linux-foundation.org> (raw)
In-Reply-To: <Pine.LNX.4.64.0707171818080.19489@blonde.wat.veritas.com>

On Tue, 17 Jul 2007 18:26:14 +0100 (BST) Hugh Dickins <hugh@veritas.com> wrote:

> On Tue, 17 Jul 2007, Andrew Morton wrote:
> > On Tue, 17 Jul 2007 16:04:54 +0100 (BST) Hugh Dickins <hugh@veritas.com> wrote:
> > > On Thu, 12 Jul 2007, Andrew Morton wrote:
> > > > 
> > > > It'd be much better to fix the race within alloc_fresh_huge_page().  That
> > > > function is pretty pathetic.
> > > > 
> > > > Something like this?
> > > > 
> > > > --- a/mm/hugetlb.c~a
> > > > +++ a/mm/hugetlb.c
> > > > @@ -105,13 +105,20 @@ static void free_huge_page(struct page *
> > > >  
> > > >  static int alloc_fresh_huge_page(void)
> > > >  {
> > > > -	static int nid = 0;
> > > > +	static int prev_nid;
> > > > +	static DEFINE_SPINLOCK(nid_lock);
> > > >  	struct page *page;
> > > > -	page = alloc_pages_node(nid, htlb_alloc_mask|__GFP_COMP|__GFP_NOWARN,
> > > > -					HUGETLB_PAGE_ORDER);
> > > > -	nid = next_node(nid, node_online_map);
> > > > +	int nid;
> > > > +
> > > > +	spin_lock(&nid_lock);
> > > > +	nid = next_node(prev_nid, node_online_map);
> > > >  	if (nid == MAX_NUMNODES)
> > > >  		nid = first_node(node_online_map);
> > > > +	prev_nid = nid;
> > > > +	spin_unlock(&nid_lock);
> > > > +
> > > > +	page = alloc_pages_node(nid, htlb_alloc_mask|__GFP_COMP|__GFP_NOWARN,
> > > > +					HUGETLB_PAGE_ORDER);
> > > >  	if (page) {
> > > >  		set_compound_page_dtor(page, free_huge_page);
> > > >  		spin_lock(&hugetlb_lock);
> > > 
> > > Now that it's gone into the tree, I look at it and wonder, does your
> > > nid_lock really serve any purpose?  We're just doing a simple assignment
> > > to prev_nid, and it doesn't matter if occasionally two racers choose the
> > > same node, and there's no protection here against a node being offlined
> > > before the alloc_pages_node anyway (unsupported? I'm ignorant).
> > 
> > umm, actually, yes, the code as it happens to be structured does mean that
> > there is no longer a way in which a race can cause us to pass MAX_NUMNODES
> > into alloc_pages_node().
> > 
> > Or not.  We can call next_node(MAX_NUMNODES, node_online_map) in that race
> > window, with perhaps bad results.
> > 
> > I think I like the lock ;)
> 
> I hate to waste your time, but I'm still puzzled.  Wasn't the race fixed
> by your changeover from use of "static int nid" throughout, to setting
> local "int nid" from "static int prev_nid", working with nid, then
> setting prev_nid from nid at the end?  What does the lock add to that?
> 

There are still minor races without the lock: two CPUs can read the same
prev_nid and both allocate from the same node, and prev_nid can
occasionally go backwards.

I agree that they are sufficiently minor that we could remove the lock.




Thread overview: 37+ messages
2007-07-13  2:45 [PATCH] Add nid sanity on alloc_pages_node Joe Jin
2007-07-13  5:18 ` Andrew Morton
2007-07-13  6:40   ` Joe Jin
2007-07-13  6:49     ` Andrew Morton
2007-07-13  6:57       ` Andrew Morton
2007-07-13  8:29         ` Paul Jackson
2007-07-13  8:38           ` Andrew Morton
2007-07-13  8:43             ` Paul Jackson
2007-07-13  8:49               ` Andrew Morton
2007-07-13  8:54                 ` Paul Jackson
2007-07-13 12:48                   ` Benjamin Herrenschmidt
2007-07-13  8:03       ` Joe Jin
2007-07-13  8:15         ` Andrew Morton
2007-07-13 12:18           ` Joe Jin
2007-07-13 12:42             ` Paul Jackson
2007-07-14 17:40             ` Nish Aravamudan
2007-07-14 18:04               ` Andrew Morton
2007-07-14 20:47                 ` Nish Aravamudan
2007-07-13  8:04       ` gurudas pai
2007-07-13  8:19         ` Andrew Morton
2007-07-13 12:37           ` gurudas pai
2007-07-13  8:37       ` Joe Jin
2007-07-13  8:44         ` Andrew Morton
2007-07-17 15:04   ` Hugh Dickins
2007-07-17 16:32     ` Andrew Morton
2007-07-17 17:26       ` Hugh Dickins
2007-07-17 18:58         ` Andrew Morton [this message]
2007-07-17 19:49           ` Hugh Dickins
2007-07-17 20:01             ` Andrew Morton
2007-07-17 20:35               ` Hugh Dickins
2007-07-18  1:40                 ` Joe Jin
2007-07-18  4:49                   ` Hugh Dickins
2007-07-18  5:45                     ` Andrew Morton
2007-07-18  7:34                       ` Joe Jin
2007-07-18  6:32                     ` Joe Jin
2007-07-18  8:09                     ` Joe Jin
2007-07-18  8:35                       ` Hugh Dickins
