From: Christoph Lameter <cl@linux.com>
To: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: linux-kernel@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Subject: [slubllv4 01/16] slub: Per object NUMA support
Date: Fri, 06 May 2011 13:05:42 -0500 [thread overview]
Message-ID: <20110506180654.663421389@linux.com> (raw)
In-Reply-To: <20110506180541.990069206@linux.com>
Currently slub applies NUMA policies per allocated slab page. Change
that to apply memory policies to each individual object allocated.
For example, before this patch MPOL_INTERLEAVE would return objects
from the same slab page until a new slab page was allocated. Now an
object from a different page is taken for each allocation.
This increases the overhead of the fastpath under NUMA.
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slub.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2011-05-05 15:21:51.000000000 -0500
+++ linux-2.6/mm/slub.c 2011-05-05 15:28:33.000000000 -0500
@@ -1873,6 +1873,21 @@ debug:
goto unlock_out;
}
+static __always_inline int alternate_slab_node(struct kmem_cache *s,
+ gfp_t flags, int node)
+{
+#ifdef CONFIG_NUMA
+ if (unlikely(node == NUMA_NO_NODE &&
+ !(flags & __GFP_THISNODE) &&
+ !in_interrupt())) {
+ if ((s->flags & SLAB_MEM_SPREAD) && cpuset_do_slab_mem_spread())
+ node = cpuset_slab_spread_node();
+ else if (current->mempolicy)
+ node = slab_node(current->mempolicy);
+ }
+#endif
+ return node;
+}
/*
* Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
* have the fastpath folded into their functions. So no function call
@@ -1893,6 +1908,8 @@ static __always_inline void *slab_alloc(
if (slab_pre_alloc_hook(s, gfpflags))
return NULL;
+ node = alternate_slab_node(s, gfpflags, node);
+
redo:
/*
Thread overview:
2011-05-06 18:05 [slubllv4 00/16] SLUB: Lockless freelists for objects V4 Christoph Lameter
2011-05-06 18:05 ` Christoph Lameter [this message]
2011-05-06 18:05 ` [slubllv4 02/16] slub: Do not use frozen page flag but a bit in the page counters Christoph Lameter
2011-05-06 18:05 ` [slubllv4 03/16] slub: Move page->frozen handling near where the page->freelist handling occurs Christoph Lameter
2011-05-06 18:05 ` [slubllv4 04/16] x86: Add support for cmpxchg_double Christoph Lameter
2011-05-06 18:05 ` [slubllv4 05/16] mm: Rearrange struct page Christoph Lameter
2011-05-06 18:05 ` [slubllv4 06/16] slub: Add cmpxchg_double_slab() Christoph Lameter
2011-05-06 18:05 ` [slubllv4 07/16] slub: explicit list_lock taking Christoph Lameter
2011-05-06 18:05 ` [slubllv4 08/16] slub: Pass kmem_cache struct to lock and freeze slab Christoph Lameter
2011-05-06 18:05 ` [slubllv4 09/16] slub: Rework allocator fastpaths Christoph Lameter
2011-05-06 18:05 ` [slubllv4 10/16] slub: Invert locking and avoid slab lock Christoph Lameter
2011-05-06 18:05 ` [slubllv4 11/16] slub: Disable interrupts in free_debug processing Christoph Lameter
2011-05-06 18:05 ` [slubllv4 12/16] slub: Avoid disabling interrupts in free slowpath Christoph Lameter
2011-05-06 18:05 ` [slubllv4 13/16] slub: Get rid of the another_slab label Christoph Lameter
2011-05-06 18:05 ` [slubllv4 14/16] slub: fast release on full slab Christoph Lameter
2011-05-06 18:05 ` [slubllv4 15/16] slub: Not necessary to check for empty slab on load_freelist Christoph Lameter
2011-05-06 18:05 ` [slubllv4 16/16] slub: update statistics for cmpxchg handling Christoph Lameter