Date: Tue, 18 Feb 2014 09:22:10 -0800
From: Nishanth Aravamudan
To: Christoph Lameter
Subject: Re: [RFC PATCH 2/3] topology: support node_numa_mem() for determining the fallback node
Message-ID: <20140218172209.GC31998@linux.vnet.ibm.com>
References: <1391674026-20092-2-git-send-email-iamjoonsoo.kim@lge.com>
	<20140207054819.GC28952@lge.com>
	<20140210191321.GD1558@linux.vnet.ibm.com>
	<20140211074159.GB27870@lge.com>
Cc: Han Pingtian, Matt Mackall, Pekka Enberg,
	Linux Memory Management List, Paul Mackerras, Anton Blanchard,
	David Rientjes, Joonsoo Kim, linuxppc-dev@lists.ozlabs.org, Wanpeng Li
List-Id: Linux on PowerPC Developers Mail List

On 12.02.2014 [16:16:11 -0600], Christoph Lameter wrote:
> Here is another patch with some fixes. The additional logic is only
> compiled in if CONFIG_HAVE_MEMORYLESS_NODES is set.
>
> Subject: slub: Memoryless node support
>
> Support memoryless nodes by tracking which allocations are failing.
> Allocations targeted to the nodes without memory fall back to the
> current available per cpu objects and if that is not available will
> create a new slab using the page allocator to fallback from the
> memoryless node to some other node.
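For anyone following along, here is a rough standalone sketch of the
bookkeeping this introduces (plain userspace C, not the kernel code;
MAX_NODES, the helper names and record_allocation() are invented for
illustration, whereas the kernel itself uses nodemask_t with
node_set()/node_clear()/node_isset()): a node gets marked once an
allocation aimed at it fails or has to fall back, and gets cleared again
as soon as a slab is successfully allocated there.

#include <stdbool.h>
#include <stdio.h>

#define MAX_NODES 8	/* illustration only; the kernel uses nodemask_t */

static unsigned long empty_nodes;	/* bit N set => node N could not satisfy us */

static void mark_empty(int node)      { empty_nodes |=  (1UL << node); }
static void mark_has_memory(int node) { empty_nodes &= ~(1UL << node); }
static bool is_empty(int node)        { return empty_nodes & (1UL << node); }

/* Rough model of what the new_slab() hunk records after an allocation. */
static void record_allocation(int requested_node, int actual_node, bool failed)
{
	if (failed) {
		if (requested_node >= 0)	/* stand-in for node != NUMA_NO_NODE */
			mark_empty(requested_node);
		return;
	}
	mark_has_memory(actual_node);		/* this node clearly has memory */
	if (requested_node >= 0 && actual_node != requested_node)
		mark_empty(requested_node);	/* requested node had to fall back */
}

int main(void)
{
	record_allocation(2, 3, false);		/* asked for node 2, page came from node 3 */
	printf("node 2 empty: %d, node 3 empty: %d\n", is_empty(2), is_empty(3));
	return 0;
}

Keeping this per-node state around is what lets later allocations that
target a known-memoryless node keep using the current per-cpu slab
instead of forcing another trip through the page allocator.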
>
> Signed-off-by: Christoph Lameter

Tested-by: Nishanth Aravamudan
Acked-by: Nishanth Aravamudan

> Index: linux/mm/slub.c
> ===================================================================
> --- linux.orig/mm/slub.c	2014-02-12 16:07:48.957869570 -0600
> +++ linux/mm/slub.c	2014-02-12 16:09:22.198928260 -0600
> @@ -134,6 +134,10 @@ static inline bool kmem_cache_has_cpu_pa
>  #endif
>  }
>
> +#ifdef CONFIG_HAVE_MEMORYLESS_NODES
> +static nodemask_t empty_nodes;
> +#endif
> +
>  /*
>   * Issues still to be resolved:
>   *
> @@ -1405,16 +1409,28 @@ static struct page *new_slab(struct kmem
>  	void *last;
>  	void *p;
>  	int order;
> +	int alloc_node;
>
>  	BUG_ON(flags & GFP_SLAB_BUG_MASK);
>
>  	page = allocate_slab(s,
>  		flags & (GFP_RECLAIM_MASK | GFP_CONSTRAINT_MASK), node);
> -	if (!page)
> +	if (!page) {
> +#ifdef CONFIG_HAVE_MEMORYLESS_NODES
> +		if (node != NUMA_NO_NODE)
> +			node_set(node, empty_nodes);
> +#endif
>  		goto out;
> +	}
>
>  	order = compound_order(page);
> -	inc_slabs_node(s, page_to_nid(page), page->objects);
> +	alloc_node = page_to_nid(page);
> +#ifdef CONFIG_HAVE_MEMORYLESS_NODES
> +	node_clear(alloc_node, empty_nodes);
> +	if (node != NUMA_NO_NODE && alloc_node != node)
> +		node_set(node, empty_nodes);
> +#endif
> +	inc_slabs_node(s, alloc_node, page->objects);
>  	memcg_bind_pages(s, order);
>  	page->slab_cache = s;
>  	__SetPageSlab(page);
> @@ -1722,7 +1738,7 @@ static void *get_partial(struct kmem_cac
>  		struct kmem_cache_cpu *c)
>  {
>  	void *object;
> -	int searchnode = (node == NUMA_NO_NODE) ? numa_node_id() : node;
> +	int searchnode = (node == NUMA_NO_NODE) ? numa_mem_id() : node;
>
>  	object = get_partial_node(s, get_node(s, searchnode), c, flags);
>  	if (object || node != NUMA_NO_NODE)
> @@ -2117,8 +2133,19 @@ static void flush_all(struct kmem_cache
>  static inline int node_match(struct page *page, int node)
>  {
>  #ifdef CONFIG_NUMA
> -	if (!page || (node != NUMA_NO_NODE && page_to_nid(page) != node))
> +	int page_node = page_to_nid(page);
> +
> +	if (!page)
>  		return 0;
> +
> +	if (node != NUMA_NO_NODE) {
> +#ifdef CONFIG_HAVE_MEMORYLESS_NODES
> +		if (node_isset(node, empty_nodes))
> +			return 1;
> +#endif
> +		if (page_node != node)
> +			return 0;
> +	}
>  #endif
>  	return 1;
>  }
>
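And, to restate the node_match() hunk in isolation, a rough standalone
model (again not the kernel code, leaving aside the !page check;
empty_nodes here is just a plain bitmask and node_match_model() is an
invented name): once a node has been recorded as memoryless, a request
that nominally targets it is allowed to be satisfied from whatever
per-cpu slab is currently active, instead of forcing a slab
deactivation and a fresh allocation.

#include <assert.h>
#include <stdbool.h>

#define NUMA_NO_NODE (-1)

/*
 * Rough standalone model of the patched node_match(): may the currently
 * active per-cpu slab (living on page_node) serve a request that asked
 * for requested_node?  empty_nodes is the bitmask from the earlier sketch.
 */
static bool node_match_model(unsigned long empty_nodes, int page_node, int requested_node)
{
	if (requested_node != NUMA_NO_NODE) {
		if (empty_nodes & (1UL << requested_node))
			return true;	/* memoryless node: any fallback page is acceptable */
		if (page_node != requested_node)
			return false;	/* otherwise a strict node match is required */
	}
	return true;
}

int main(void)
{
	unsigned long empty = 1UL << 2;		/* pretend node 2 is memoryless */

	assert(node_match_model(empty, 3, 2));	/* request for node 2 may use node 3's slab */
	assert(!node_match_model(0, 3, 2));	/* without that knowledge it may not */
	return 0;
}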