From: Christoph Lameter <clameter@sgi.com>
To: Matthew Wilcox <matthew@wil.cx>
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, Pekka Enberg <penberg@cs.helsinki.fi>
Subject: [patch 01/10] SLUB: Consolidate add_partial and add_partial_tail to one function
Date: Sat, 27 Oct 2007 20:31:57 -0700
Message-ID: <20071028033258.546533164@sgi.com>
In-Reply-To: <20071028033156.022983073@sgi.com>
Add a tail parameter to add_partial() instead of having two separate
functions. This allows detailed control from multiple call sites over
where a slab is placed when it is put back on the partial list. Slabs
put back at the front of the list are likely to be used for
allocations immediately; slabs put at the end maximize the time that
partial slabs spend without allocations.
When deactivating a slab, we can put slabs that had objects freed to
them from remote cpus at the end of the list so that their cache lines
can cool down. Slabs that had objects freed to them from the local cpu
are put at the front of the list so that they are reused as soon as
possible, exploiting their cache-hot state.
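
The front-vs-tail policy can be illustrated outside the kernel. Below
is a minimal userspace sketch, not part of the patch: the list helpers
are simplified stand-ins for the kernel's list_add() and
list_add_tail(), and demo_add_partial()/struct demo_slab are
hypothetical reductions of the consolidated add_partial() and
struct page.

/*
 * Userspace sketch, not kernel code. The helpers below mimic the
 * semantics of the kernel's list_add()/list_add_tail().
 */
#include <stdio.h>

struct list_head {
	struct list_head *next, *prev;
};

static void list_init(struct list_head *h)
{
	h->next = h->prev = h;
}

static void __list_insert(struct list_head *new, struct list_head *prev,
			  struct list_head *next)
{
	next->prev = new;
	new->next = next;
	new->prev = prev;
	prev->next = new;
}

/* Front insertion: the slab is the first candidate for allocation. */
static void list_add(struct list_head *new, struct list_head *head)
{
	__list_insert(new, head, head->next);
}

/* Tail insertion: the slab's cache lines get time to cool down. */
static void list_add_tail(struct list_head *new, struct list_head *head)
{
	__list_insert(new, head->prev, head);
}

/* lru must stay the first member so a list_head * cast recovers it. */
struct demo_slab {
	struct list_head lru;
	const char *name;
};

/* One function instead of two; tail selects the insertion point. */
static void demo_add_partial(struct list_head *partial,
			     struct demo_slab *slab, int tail)
{
	if (tail)
		list_add_tail(&slab->lru, partial);
	else
		list_add(&slab->lru, partial);
}

int main(void)
{
	struct list_head partial;
	struct demo_slab hot = { .name = "hot (local frees)" };
	struct demo_slab cold = { .name = "cold (remote frees)" };
	struct list_head *pos;

	list_init(&partial);
	demo_add_partial(&partial, &cold, 1);	/* remote frees -> tail */
	demo_add_partial(&partial, &hot, 0);	/* local frees -> front */

	/* Walk from the front: the hot slab is picked first. */
	for (pos = partial.next; pos != &partial; pos = pos->next)
		printf("%s\n", ((struct demo_slab *)pos)->name);
	return 0;
}

Compiled and run, this prints the hot slab before the cold one, which
is the pick order that deactivate_slab() aims for.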
[This patch is already in mm]
Signed-off-by: Christoph Lameter <clameter@sgi.com>
---
mm/slub.c | 31 +++++++++++++++----------------
1 file changed, 15 insertions(+), 16 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2007-10-24 08:33:01.000000000 -0700
+++ linux-2.6/mm/slub.c 2007-10-24 09:19:52.000000000 -0700
@@ -1197,19 +1197,15 @@
/*
* Management of partially allocated slabs
*/
-static void add_partial_tail(struct kmem_cache_node *n, struct page *page)
+static void add_partial(struct kmem_cache_node *n,
+ struct page *page, int tail)
{
spin_lock(&n->list_lock);
n->nr_partial++;
- list_add_tail(&page->lru, &n->partial);
- spin_unlock(&n->list_lock);
-}
-
-static void add_partial(struct kmem_cache_node *n, struct page *page)
-{
- spin_lock(&n->list_lock);
- n->nr_partial++;
- list_add(&page->lru, &n->partial);
+ if (tail)
+ list_add_tail(&page->lru, &n->partial);
+ else
+ list_add(&page->lru, &n->partial);
spin_unlock(&n->list_lock);
}
@@ -1337,7 +1333,7 @@
*
* On exit the slab lock will have been dropped.
*/
-static void unfreeze_slab(struct kmem_cache *s, struct page *page)
+static void unfreeze_slab(struct kmem_cache *s, struct page *page, int tail)
{
struct kmem_cache_node *n = get_node(s, page_to_nid(page));
@@ -1345,7 +1341,7 @@
if (page->inuse) {
if (page->freelist)
- add_partial(n, page);
+ add_partial(n, page, tail);
else if (SlabDebug(page) && (s->flags & SLAB_STORE_USER))
add_full(n, page);
slab_unlock(page);
@@ -1360,7 +1356,7 @@
* partial list stays small. kmem_cache_shrink can
* reclaim empty slabs from the partial list.
*/
- add_partial_tail(n, page);
+ add_partial(n, page, 1);
slab_unlock(page);
} else {
slab_unlock(page);
@@ -1375,6 +1371,7 @@
static void deactivate_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
{
struct page *page = c->page;
+ int tail = 1;
/*
* Merge cpu freelist into freelist. Typically we get here
* because both freelists are empty. So this is unlikely
@@ -1383,6 +1380,8 @@
while (unlikely(c->freelist)) {
void **object;
+ tail = 0; /* Hot objects. Put the slab first */
+
/* Retrieve object from cpu_freelist */
object = c->freelist;
c->freelist = c->freelist[c->offset];
@@ -1393,7 +1392,7 @@
page->inuse--;
}
c->page = NULL;
- unfreeze_slab(s, page);
+ unfreeze_slab(s, page, tail);
}
static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
@@ -1633,7 +1632,7 @@
* then add it.
*/
if (unlikely(!prior))
- add_partial(get_node(s, page_to_nid(page)), page);
+ add_partial(get_node(s, page_to_nid(page)), page, 0);
out_unlock:
slab_unlock(page);
@@ -2041,7 +2040,7 @@
#endif
init_kmem_cache_node(n);
atomic_long_inc(&n->nr_slabs);
- add_partial(n, page);
+ add_partial(n, page, 0);
return n;
}
--