From: Christoph Lameter <cl@linux-foundation.org>
To: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: linux-mm@kvack.org, Tejun Heo <tj@kernel.org>,
David Rientjes <rientjes@google.com>
Subject: [S+Q Cleanup3 3/6] slub: Remove static kmem_cache_cpu array for boot
Date: Thu, 19 Aug 2010 15:33:27 -0500 [thread overview]
Message-ID: <20100819203438.175241114@linux.com> (raw)
In-Reply-To: <20100819203324.549566024@linux.com>
The percpu allocator can now handle allocations during early boot, so drop
the static kmem_cache_cpu array that was only needed while the percpu
allocator was not yet available.
Cc: Tejun Heo <tj@kernel.org>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
---
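Not part of the patch, just reviewer context: below is a minimal sketch of
the plain alloc_percpu()/per_cpu_ptr() pattern that alloc_kmem_cache_cpus()
now uses unconditionally, in place of handing out slots from a static
DEFINE_PER_CPU() array for the boot caches. The example_* identifiers are
hypothetical and not SLUB code.

/*
 * Illustrative sketch only (not SLUB code): the dynamic per-cpu allocation
 * pattern that replaces a static DEFINE_PER_CPU() array.  One instance is
 * allocated per possible CPU with alloc_percpu(); each CPU's copy is then
 * reached through per_cpu_ptr().
 */
#include <linux/percpu.h>
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/init.h>

struct example_cpu_state {		/* hypothetical per-cpu state */
	unsigned long hits;
};

static struct example_cpu_state __percpu *example_state;

static int __init example_state_init(void)
{
	int cpu;

	example_state = alloc_percpu(struct example_cpu_state);
	if (!example_state)
		return -ENOMEM;

	/* alloc_percpu() already zeroes the memory; the loop just shows
	 * how each CPU's instance is reached. */
	for_each_possible_cpu(cpu)
		per_cpu_ptr(example_state, cpu)->hits = 0;

	return 0;
}

With the percpu allocator able to serve requests during early boot, the same
path now works for the boot-time kmalloc caches as well; the BUILD_BUG_ON
added by this patch checks that the early reserve (PERCPU_DYNAMIC_EARLY_SIZE)
is large enough for one kmem_cache_cpu per boot-time kmalloc cache.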
mm/slub.c | 17 ++++-------------
1 file changed, 4 insertions(+), 13 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2010-08-18 09:41:00.000000000 -0500
+++ linux-2.6/mm/slub.c 2010-08-18 09:55:20.000000000 -0500
@@ -2062,23 +2062,14 @@ init_kmem_cache_node(struct kmem_cache_n
 #endif
 }
 
-static DEFINE_PER_CPU(struct kmem_cache_cpu, kmalloc_percpu[KMALLOC_CACHES]);
-
 static inline int alloc_kmem_cache_cpus(struct kmem_cache *s)
 {
-	if (s < kmalloc_caches + KMALLOC_CACHES && s >= kmalloc_caches)
-		/*
-		 * Boot time creation of the kmalloc array. Use static per cpu data
-		 * since the per cpu allocator is not available yet.
-		 */
-		s->cpu_slab = kmalloc_percpu + (s - kmalloc_caches);
-	else
-		s->cpu_slab = alloc_percpu(struct kmem_cache_cpu);
+	BUILD_BUG_ON(PERCPU_DYNAMIC_EARLY_SIZE <
+			SLUB_PAGE_SHIFT * sizeof(struct kmem_cache_cpu));
 
-	if (!s->cpu_slab)
-		return 0;
+	s->cpu_slab = alloc_percpu(struct kmem_cache_cpu);
 
-	return 1;
+	return s->cpu_slab != NULL;
 }
 
 #ifdef CONFIG_NUMA
--