Date: Tue, 18 Feb 2014 09:22:10 -0800
From: Nishanth Aravamudan
Subject: Re: [RFC PATCH 2/3] topology: support node_numa_mem() for determining the fallback node
Message-ID: <20140218172209.GC31998@linux.vnet.ibm.com>
References: <1391674026-20092-2-git-send-email-iamjoonsoo.kim@lge.com>
 <20140207054819.GC28952@lge.com>
 <20140210191321.GD1558@linux.vnet.ibm.com>
 <20140211074159.GB27870@lge.com>
To: Christoph Lameter
Cc: Joonsoo Kim, David Rientjes, Han Pingtian, Pekka Enberg,
 Linux Memory Management List, Paul Mackerras, Anton Blanchard,
 Matt Mackall, linuxppc-dev@lists.ozlabs.org, Wanpeng Li

On 12.02.2014 [16:16:11 -0600], Christoph Lameter wrote:
> Here is another patch with some fixes. The additional logic is only
> compiled in if CONFIG_HAVE_MEMORYLESS_NODES is set.
>
> Subject: slub: Memoryless node support
>
> Support memoryless nodes by tracking which allocations fail.
> Allocations targeted at a node without memory fall back to the
> currently available per-cpu objects; if none are available, a new
> slab is created with the page allocator, falling back from the
> memoryless node to some other node.
>
> Signed-off-by: Christoph Lameter

Tested-by: Nishanth Aravamudan
Acked-by: Nishanth Aravamudan

> Index: linux/mm/slub.c
> ===================================================================
> --- linux.orig/mm/slub.c        2014-02-12 16:07:48.957869570 -0600
> +++ linux/mm/slub.c     2014-02-12 16:09:22.198928260 -0600
> @@ -134,6 +134,10 @@ static inline bool kmem_cache_has_cpu_pa
>  #endif
>  }
>
> +#ifdef CONFIG_HAVE_MEMORYLESS_NODES
> +static nodemask_t empty_nodes; /* nodes whose targeted allocations failed */
> +#endif
> +
>  /*
>   * Issues still to be resolved:
>   *
> @@ -1405,16 +1409,28 @@ static struct page *new_slab(struct kmem
>         void *last;
>         void *p;
>         int order;
> +       int alloc_node;
>
>         BUG_ON(flags & GFP_SLAB_BUG_MASK);
>
>         page = allocate_slab(s,
>                 flags & (GFP_RECLAIM_MASK | GFP_CONSTRAINT_MASK), node);
> -       if (!page)
> +       if (!page) {
> +#ifdef CONFIG_HAVE_MEMORYLESS_NODES
> +               if (node != NUMA_NO_NODE)
> +                       node_set(node, empty_nodes);
> +#endif
>                 goto out;
> +       }
>
>         order = compound_order(page);
> -       inc_slabs_node(s, page_to_nid(page), page->objects);
> +       alloc_node = page_to_nid(page);
> +#ifdef CONFIG_HAVE_MEMORYLESS_NODES
> +       node_clear(alloc_node, empty_nodes);
> +       if (node != NUMA_NO_NODE && alloc_node != node)
> +               node_set(node, empty_nodes);
> +#endif
> +       inc_slabs_node(s, alloc_node, page->objects);
>         memcg_bind_pages(s, order);
>         page->slab_cache = s;
>         __SetPageSlab(page);
> @@ -1722,7 +1738,7 @@ static void *get_partial(struct kmem_cac
>                                         struct kmem_cache_cpu *c)
>  {
>         void *object;
> -       int searchnode = (node == NUMA_NO_NODE) ? numa_node_id() : node;
> +       int searchnode = (node == NUMA_NO_NODE) ? numa_mem_id() : node;
>
>         object = get_partial_node(s, get_node(s, searchnode), c, flags);
>         if (object || node != NUMA_NO_NODE)
> @@ -2117,8 +2133,18 @@ static void flush_all(struct kmem_cache
>  static inline int node_match(struct page *page, int node)
>  {
>  #ifdef CONFIG_NUMA
> -       if (!page || (node != NUMA_NO_NODE && page_to_nid(page) != node))
> +       if (!page)
>                 return 0;
> +
> +       if (node != NUMA_NO_NODE) {
> +#ifdef CONFIG_HAVE_MEMORYLESS_NODES
> +               /* A memoryless node accepts objects from any node. */
> +               if (node_isset(node, empty_nodes))
> +                       return 1;
> +#endif
> +               if (page_to_nid(page) != node)
> +                       return 0;
> +       }
>  #endif
>         return 1;
>  }
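
To make the mechanism concrete outside the kernel tree, here is a minimal
userspace sketch of the same tracking scheme. It is an illustration only,
under invented names, not kernel API: NO_NODE, alloc_on_node(),
node_has_memory(), the unsigned-int bitmask standing in for the
empty_nodes nodemask, and the hard-coded memoryless node 1 are all
assumptions of the example.

/*
 * memoryless_sketch.c - hypothetical userspace model of the empty_nodes
 * tracking in the patch above. Not kernel code; all names are invented.
 * Build: cc -Wall -o memoryless_sketch memoryless_sketch.c
 */
#include <stdbool.h>
#include <stdio.h>

#define NO_NODE -1                      /* stands in for NUMA_NO_NODE */

static unsigned int empty_nodes;        /* bit n set: node n failed an allocation */

/* Pretend node 1 is memoryless, so allocations aimed at it always fail. */
static bool node_has_memory(int node)
{
        return node != 1;
}

/*
 * Model of the new_slab() change: try the requested node, record a
 * failure in empty_nodes, and fall back to a node that has memory.
 * Returns the node the object actually lives on.
 */
static int alloc_on_node(int node)
{
        if (node != NO_NODE && !node_has_memory(node)) {
                empty_nodes |= 1u << node;      /* remember the failure */
                node = 0;                       /* fall back elsewhere */
        }
        if (node == NO_NODE)
                node = 0;
        empty_nodes &= ~(1u << node);           /* this node served us */
        return node;
}

/*
 * Model of the node_match() change: a request for `node` is satisfied
 * by an object on `page_node` if the request was unconstrained, the
 * nodes agree, or `node` is known to be memoryless (so any fallback
 * object is acceptable).
 */
static bool node_match(int page_node, int node)
{
        if (node == NO_NODE)
                return true;
        if (empty_nodes & (1u << node))
                return true;
        return page_node == node;
}

int main(void)
{
        int got = alloc_on_node(1);     /* target the memoryless node */

        printf("requested node 1, allocated on node %d\n", got);
        printf("fallback object accepted: %s\n",
               node_match(got, 1) ? "yes" : "no");
        return 0;
}

Once node 1 has been marked, node_match() returns true even though the
object really lives on node 0. That is the point of the empty_nodes mask:
allocations targeted at a memoryless node accept the fallback object
instead of deactivating the per-cpu slab on every pass through the fast
path.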