From: Christoph Lameter <cl@linux.com>
To: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: linux-kernel@vger.kernel.org
Subject: [slubllv3 17/21] slub: Avoid disabling interrupts in free slowpath
Date: Fri, 15 Apr 2011 15:47:47 -0500 [thread overview]
Message-ID: <20110415204755.934341102@linux.com> (raw)
In-Reply-To: 20110415204730.326790555@linux.com
Disabling interrupts can now be avoided in the free slowpath. However, list
operations still require interrupts to be disabled, since allocations can
occur from interrupt context and there is no way to perform the list
operations atomically. So acquire the list lock opportunistically whenever
there is a chance that list operations will be needed. This may cause some
needless synchronization, but it allows synchronization to be avoided in the
majority of cases.

Dropping the unconditional interrupt disabling significantly simplifies the
slowpath.
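The pattern looks roughly like the following minimal sketch (illustration
only, not the actual SLUB code; struct node, free_slowpath() and
update_partial_list() are hypothetical placeholders):

/*
 * Sketch of opportunistic locking: take the node's list_lock only when the
 * free might have to touch the partial list, and combine lock acquisition
 * with interrupt disabling so no unconditional local_irq_save() is needed.
 */
struct node {
	spinlock_t list_lock;
	/* ... partial list, counters ... */
};

static void free_slowpath(struct node *n, bool list_op_possible)
{
	unsigned long flags;
	bool locked = false;

	if (list_op_possible) {
		/* Lock and disable interrupts in a single step. */
		spin_lock_irqsave(&n->list_lock, flags);
		locked = true;
	}

	/* ... lockless cmpxchg-based freelist update happens here ... */

	if (locked) {
		update_partial_list(n);	/* list manipulation under the lock */
		spin_unlock_irqrestore(&n->list_lock, flags);
	}
}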
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slub.c | 23 ++++++++++++-----------
1 file changed, 12 insertions(+), 11 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2011-04-15 14:30:05.000000000 -0500
+++ linux-2.6/mm/slub.c 2011-04-15 14:30:06.000000000 -0500
@@ -2225,13 +2225,11 @@ static void __slab_free(struct kmem_cach
struct kmem_cache_node *n = NULL;
#ifdef CONFIG_CMPXCHG_LOCAL
unsigned long flags;
-
- local_irq_save(flags);
#endif
stat(s, FREE_SLOWPATH);
if (kmem_cache_debug(s) && !free_debug_processing(s, page, x, addr))
- goto out_unlock;
+ return;
do {
prior = page->freelist;
@@ -2250,7 +2248,11 @@ static void __slab_free(struct kmem_cach
* Otherwise the list_lock will synchronize with
* other processors updating the list of slabs.
*/
+#ifdef CONFIG_CMPXCHG_LOCAL
+ spin_lock_irqsave(&n->list_lock, flags);
+#else
spin_lock(&n->list_lock);
+#endif
}
inuse = new.inuse;
@@ -2266,7 +2268,7 @@ static void __slab_free(struct kmem_cach
*/
if (was_frozen)
stat(s, FREE_FROZEN);
- goto out_unlock;
+ return;
}
/*
@@ -2289,12 +2291,10 @@ static void __slab_free(struct kmem_cach
stat(s, FREE_ADD_PARTIAL);
}
}
-
- spin_unlock(&n->list_lock);
-
-out_unlock:
#ifdef CONFIG_CMPXCHG_LOCAL
- local_irq_restore(flags);
+ spin_unlock_irqrestore(&n->list_lock, flags);
+#else
+ spin_unlock(&n->list_lock);
#endif
return;
@@ -2307,9 +2307,10 @@ slab_empty:
stat(s, FREE_REMOVE_PARTIAL);
}
- spin_unlock(&n->list_lock);
#ifdef CONFIG_CMPXCHG_LOCAL
- local_irq_restore(flags);
+ spin_unlock_irqrestore(&n->list_lock, flags);
+#else
+ spin_unlock(&n->list_lock);
#endif
stat(s, FREE_SLAB);
discard_slab(s, page);