From: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
To: Mel Gorman <mel@csn.ul.ie>
Cc: Christoph Lameter <clameter@sgi.com>,
akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, rientjes@google.com, nacc@us.ibm.com,
kamezawa.hiroyu@jp.fujitsu.com
Subject: Re: [PATCH 6/6] Use one zonelist that is filtered by nodemask
Date: Tue, 20 Nov 2007 10:14:39 -0500
Message-ID: <1195571680.5041.14.camel@localhost>
In-Reply-To: <20071120141953.GB32313@csn.ul.ie>

On Tue, 2007-11-20 at 14:19 +0000, Mel Gorman wrote:
> On (09/11/07 07:45), Christoph Lameter didst pronounce:
> > On Fri, 9 Nov 2007, Mel Gorman wrote:
> >
> > > struct page * fastcall
> > > __alloc_pages(gfp_t gfp_mask, unsigned int order,
> > > struct zonelist *zonelist)
> > > {
> > > + /*
> > > + * Use a temporary nodemask for __GFP_THISNODE allocations. If the
> > > + * cost of allocating on the stack or the stack usage becomes
> > > + * noticable, allocate the nodemasks per node at boot or compile time
> > > + */
> > > + if (unlikely(gfp_mask & __GFP_THISNODE)) {
> > > + nodemask_t nodemask;
> >
> > Hmmm.. This places a potentially big structure on the stack. nodemask can
> > contain up to 1024 bits which means 128 bytes. Maybe keep an array of
> > gfp_thisnode nodemasks (node_nodemask?) and use node_nodemask[nid]?
>
> Went back and revisited this. Allocating them at boot-time is below, but
> essentially it is silly and it makes sense to just have two zonelists
> where one of them is for __GFP_THISNODE. Implementation-wise, this involves
> dropping the last patch in the set, and the overall result is still a reduction
> in the number of zonelists.
Hi, Mel:
I'll try this out [in 2.6.24-rc2-mm1 or later]. I have a series with yet
another rework of policy reference counting that will be easier/cleaner
atop your series without the external zonelist hung off MPOL_BIND
policies. So, I'm hoping your series goes into -mm "real soon now".
Question: just wondering why you didn't embed the __GFP_THISNODE nodemask
in the pg_data_t, initialized at boot/node-hot-add time, perhaps under
#ifdef CONFIG_NUMA? The size is known at build time, right? And
wouldn't that be much smaller than an additional zonelist?
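Something like the following rough sketch is what I have in mind (untested,
and the field/helper names here are just illustrative, not a concrete
proposal):

/* include/linux/mmzone.h */
typedef struct pglist_data {
	struct zone node_zones[MAX_NR_ZONES];
	struct zonelist node_zonelist;
	int nr_zones;
#ifdef CONFIG_NUMA
	/*
	 * Embedded rather than allocated: sizeof(nodemask_t) is
	 * MAX_NUMNODES bits, known at build time, so there is no
	 * bootmem allocation and no failure path to handle.
	 */
	nodemask_t nodemask_thisnode;
#endif
	/* ... rest of pg_data_t unchanged ... */
} pg_data_t;

/* mm/page_alloc.c: called from free_area_init_node() / node hot-add */
static void init_node_nodemask_thisnode(pg_data_t *pgdat)
{
#ifdef CONFIG_NUMA
	/* the node's "thisnode" mask contains only the node itself */
	nodes_clear(pgdat->nodemask_thisnode);
	node_set(pgdat->node_id, pgdat->nodemask_thisnode);
#endif
}

The __GFP_THISNODE paths would then just pass
&NODE_DATA(nid)->nodemask_thisnode down to __alloc_pages_nodemask(),
with no NULL check and no "allocation failed" fallback.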
Lee
>
> diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.24-rc2-mm1-040_use_one_zonelist/include/linux/gfp.h linux-2.6.24-rc2-mm1-045_use_static_nodemask/include/linux/gfp.h
> --- linux-2.6.24-rc2-mm1-040_use_one_zonelist/include/linux/gfp.h 2007-11-19 19:27:15.000000000 +0000
> +++ linux-2.6.24-rc2-mm1-045_use_static_nodemask/include/linux/gfp.h 2007-11-19 19:28:55.000000000 +0000
> @@ -175,7 +175,6 @@ FASTCALL(__alloc_pages(gfp_t, unsigned i
> extern struct page *
> FASTCALL(__alloc_pages_nodemask(gfp_t, unsigned int,
> struct zonelist *, nodemask_t *nodemask));
> -extern nodemask_t *nodemask_thisnode(int nid, nodemask_t *nodemask);
>
> static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
> unsigned int order)
> @@ -187,13 +186,10 @@ static inline struct page *alloc_pages_n
> if (nid < 0)
> nid = numa_node_id();
>
> - /* Use a temporary nodemask for __GFP_THISNODE allocations */
> if (unlikely(gfp_mask & __GFP_THISNODE)) {
> - nodemask_t nodemask;
> -
> return __alloc_pages_nodemask(gfp_mask, order,
> node_zonelist(nid),
> - nodemask_thisnode(nid, &nodemask));
> + NODE_DATA(nid)->nodemask_thisnode);
> }
>
> return __alloc_pages(gfp_mask, order, node_zonelist(nid));
> diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.24-rc2-mm1-040_use_one_zonelist/include/linux/mmzone.h linux-2.6.24-rc2-mm1-045_use_static_nodemask/include/linux/mmzone.h
> --- linux-2.6.24-rc2-mm1-040_use_one_zonelist/include/linux/mmzone.h 2007-11-19 19:27:15.000000000 +0000
> +++ linux-2.6.24-rc2-mm1-045_use_static_nodemask/include/linux/mmzone.h 2007-11-19 19:28:55.000000000 +0000
> @@ -519,6 +519,9 @@ typedef struct pglist_data {
> struct zone node_zones[MAX_NR_ZONES];
> struct zonelist node_zonelist;
> int nr_zones;
> +
> + /* nodemask suitable for __GFP_THISNODE */
> + nodemask_t *nodemask_thisnode;
> #ifdef CONFIG_FLAT_NODE_MEM_MAP
> struct page *node_mem_map;
> #endif
> diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.24-rc2-mm1-040_use_one_zonelist/mm/page_alloc.c linux-2.6.24-rc2-mm1-045_use_static_nodemask/mm/page_alloc.c
> --- linux-2.6.24-rc2-mm1-040_use_one_zonelist/mm/page_alloc.c 2007-11-19 19:27:15.000000000 +0000
> +++ linux-2.6.24-rc2-mm1-045_use_static_nodemask/mm/page_alloc.c 2007-11-19 19:28:55.000000000 +0000
> @@ -1695,28 +1695,36 @@ got_pg:
> }
>
> /* Creates a nodemask suitable for GFP_THISNODE allocations */
> -nodemask_t *nodemask_thisnode(int nid, nodemask_t *nodemask)
> +static inline void alloc_node_nodemask_thisnode(pg_data_t *pgdat)
> {
> - nodes_clear(*nodemask);
> - node_set(nid, *nodemask);
> + nodemask_t *nodemask_thisnode;
>
> - return nodemask;
> + /* Only a machine with multiple nodes needs the nodemask */
> + if (!NUMA_BUILD || num_online_nodes() == 1)
> + return;
> +
> + /* Allocate the nodemask. Serious if it fails, but not world ending */
> + nodemask_thisnode = alloc_bootmem_node(pgdat, sizeof(nodemask_t));
> + if (!nodemask_thisnode) {
> + printk(KERN_WARNING
> + "thisnode nodemask allocation failed."
> + "There may be sub-optimal NUMA placement.\n");
> + return;
> + }
> +
> + /* Initialise the nodemask to only cover the current node */
> + nodes_clear(*nodemask_thisnode);
> + node_set(pgdat->node_id, *nodemask_thisnode);
> + pgdat->nodemask_thisnode = nodemask_thisnode;
> }
>
> struct page * fastcall
> __alloc_pages(gfp_t gfp_mask, unsigned int order,
> struct zonelist *zonelist)
> {
> - /*
> - * Use a temporary nodemask for __GFP_THISNODE allocations. If the
> - * cost of allocating on the stack or the stack usage becomes
> - * noticable, allocate the nodemasks per node at boot or compile time
> - */
> if (unlikely(gfp_mask & __GFP_THISNODE)) {
> - nodemask_t nodemask;
> -
> - return __alloc_pages_internal(gfp_mask, order,
> - zonelist, nodemask_thisnode(numa_node_id(), &nodemask));
> + return __alloc_pages_internal(gfp_mask, order, zonelist,
> + NODE_DATA(numa_node_id())->nodemask_thisnode);
> }
>
> return __alloc_pages_internal(gfp_mask, order, zonelist, NULL);
> @@ -3501,6 +3509,7 @@ void __meminit free_area_init_node(int n
> calculate_node_totalpages(pgdat, zones_size, zholes_size);
>
> alloc_node_mem_map(pgdat);
> + alloc_node_nodemask_thisnode(pgdat);
>
> free_area_init_core(pgdat, zones_size, zholes_size);
> }
>