* [S+Q Cleanup4 0/6] SLUB: Cleanups V4
@ 2010-08-20 17:37 Christoph Lameter
2010-08-20 17:37 ` [S+Q Cleanup4 1/6] Slub: Force no inlining of debug functions Christoph Lameter
` (7 more replies)
0 siblings, 8 replies; 11+ messages in thread
From: Christoph Lameter @ 2010-08-20 17:37 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-mm, David Rientjes
V1->V2: Fixes as discussed with David.
V2->V3: Deeper fixes. Return a pointer to the kmem_cache from create_kmalloc_cache.
V3->V4: Some missing final touches
These are just the 6 remaining cleanup patches (after the 2.6.36 merge
got the others in) in preparation for the unification patches.
I think it may be best to first try to merge these and make sure that
they are fine before we go step by step through the unification patches.
I hope they can go into -next.
Patch 1
Uninline the debug functions in the hot paths. There is no point in the compiler
folding them in because they are typically unused.
Patch 2
Remove dynamic creation of DMA caches and create them statically instead
(patch 4 makes them dynamic again, but they are then always preallocated
at boot and never from the hotpath).
Patch 3
Remove static allocation of kmem_cache_cpu array and rely on the
percpu allocator to allocate memory for the array on bootup.
Patch 4
Remove static allocation of kmem_cache structure for kmalloc and friends.
Patch 5
Extract hooks for memory checkers.
Patch 6
Move gfpflag masking out of the allocator hotpath
* [S+Q Cleanup4 1/6] Slub: Force no inlining of debug functions
2010-08-20 17:37 [S+Q Cleanup4 0/6] SLUB: Cleanups V4 Christoph Lameter
@ 2010-08-20 17:37 ` Christoph Lameter
2010-08-20 17:37 ` [S+Q Cleanup4 2/6] slub: remove dynamic dma slab allocation Christoph Lameter
` (6 subsequent siblings)
7 siblings, 0 replies; 11+ messages in thread
From: Christoph Lameter @ 2010-08-20 17:37 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-mm, David Rientjes
[-- Attachment #1: slub_nolinline --]
[-- Type: text/plain, Size: 1407 bytes --]
The compiler folds the debugging functions into the critical paths.
Avoid that by adding noinline to the functions that check for
problems.
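For illustration only, a minimal userspace sketch of the same effect; the
function names here are invented and not taken from slub.c:

#include <stdio.h>

/* Without noinline, gcc tends to fold this small check into every
 * caller, bloating the hot path even though the check rarely matters. */
static __attribute__((noinline)) int check_object(const void *object)
{
	if (!object) {
		fprintf(stderr, "bad object\n");
		return 0;
	}
	return 1;
}

int hot_path(void *object)
{
	if (!check_object(object))	/* stays a call, not inlined code */
		return -1;
	return 0;
}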
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slub.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2010-08-19 14:13:02.000000000 -0500
+++ linux-2.6/mm/slub.c 2010-08-19 14:13:05.000000000 -0500
@@ -862,7 +862,7 @@ static void setup_object_debug(struct km
init_tracking(s, object);
}
-static int alloc_debug_processing(struct kmem_cache *s, struct page *page,
+static noinline int alloc_debug_processing(struct kmem_cache *s, struct page *page,
void *object, unsigned long addr)
{
if (!check_slab(s, page))
@@ -902,8 +902,8 @@ bad:
return 0;
}
-static int free_debug_processing(struct kmem_cache *s, struct page *page,
- void *object, unsigned long addr)
+static noinline int free_debug_processing(struct kmem_cache *s,
+ struct page *page, void *object, unsigned long addr)
{
if (!check_slab(s, page))
goto fail;
* [S+Q Cleanup4 2/6] slub: remove dynamic dma slab allocation
2010-08-20 17:37 [S+Q Cleanup4 0/6] SLUB: Cleanups V4 Christoph Lameter
2010-08-20 17:37 ` [S+Q Cleanup4 1/6] Slub: Force no inlining of debug functions Christoph Lameter
@ 2010-08-20 17:37 ` Christoph Lameter
2010-08-20 17:37 ` [S+Q Cleanup4 3/6] slub: Remove static kmem_cache_cpu array for boot Christoph Lameter
` (5 subsequent siblings)
7 siblings, 0 replies; 11+ messages in thread
From: Christoph Lameter @ 2010-08-20 17:37 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-mm, David Rientjes
[-- Attachment #1: slub_remove_dynamic_dma --]
[-- Type: text/plain, Size: 8857 bytes --]
Remove the dynamic DMA slab allocation since it causes too many issues with
nested locks and the like. The change also avoids passing gfpflags into many functions.
V3->V4:
- Create dma caches in kmem_cache_init() instead of kmem_cache_init_late().
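As a rough userspace analogue of the change (all names below are made up),
the DMA variants are now set up once at init time, so the allocation path
never has to create a cache under lock:

#include <stddef.h>

struct cache { size_t object_size; };

#define NR_SIZES 8

static struct cache normal_caches[NR_SIZES];
static struct cache dma_caches[NR_SIZES];	/* stand-in for kmalloc_dma_caches */

/* Runs once at init: later lookups never create caches, so no lock and
 * no gfp flags are needed on the allocation path. */
static void init_caches(void)
{
	for (int i = 0; i < NR_SIZES; i++) {
		normal_caches[i].object_size = (size_t)8 << i;
		dma_caches[i].object_size = (size_t)8 << i;
	}
}

static struct cache *get_cache(int index, int dma)
{
	return dma ? &dma_caches[index] : &normal_caches[index];
}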
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
---
mm/slub.c | 150 ++++++++++++++++----------------------------------------------
1 file changed, 39 insertions(+), 111 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2010-08-18 09:40:14.000000000 -0500
+++ linux-2.6/mm/slub.c 2010-08-18 09:41:00.000000000 -0500
@@ -2064,7 +2064,7 @@ init_kmem_cache_node(struct kmem_cache_n
static DEFINE_PER_CPU(struct kmem_cache_cpu, kmalloc_percpu[KMALLOC_CACHES]);
-static inline int alloc_kmem_cache_cpus(struct kmem_cache *s, gfp_t flags)
+static inline int alloc_kmem_cache_cpus(struct kmem_cache *s)
{
if (s < kmalloc_caches + KMALLOC_CACHES && s >= kmalloc_caches)
/*
@@ -2091,7 +2091,7 @@ static inline int alloc_kmem_cache_cpus(
* when allocating for the kmalloc_node_cache. This is used for bootstrapping
* memory on a fresh node that has no slab structures yet.
*/
-static void early_kmem_cache_node_alloc(gfp_t gfpflags, int node)
+static void early_kmem_cache_node_alloc(int node)
{
struct page *page;
struct kmem_cache_node *n;
@@ -2099,7 +2099,7 @@ static void early_kmem_cache_node_alloc(
BUG_ON(kmalloc_caches->size < sizeof(struct kmem_cache_node));
- page = new_slab(kmalloc_caches, gfpflags, node);
+ page = new_slab(kmalloc_caches, GFP_NOWAIT, node);
BUG_ON(!page);
if (page_to_nid(page) != node) {
@@ -2143,7 +2143,7 @@ static void free_kmem_cache_nodes(struct
}
}
-static int init_kmem_cache_nodes(struct kmem_cache *s, gfp_t gfpflags)
+static int init_kmem_cache_nodes(struct kmem_cache *s)
{
int node;
@@ -2151,11 +2151,11 @@ static int init_kmem_cache_nodes(struct
struct kmem_cache_node *n;
if (slab_state == DOWN) {
- early_kmem_cache_node_alloc(gfpflags, node);
+ early_kmem_cache_node_alloc(node);
continue;
}
n = kmem_cache_alloc_node(kmalloc_caches,
- gfpflags, node);
+ GFP_KERNEL, node);
if (!n) {
free_kmem_cache_nodes(s);
@@ -2172,7 +2172,7 @@ static void free_kmem_cache_nodes(struct
{
}
-static int init_kmem_cache_nodes(struct kmem_cache *s, gfp_t gfpflags)
+static int init_kmem_cache_nodes(struct kmem_cache *s)
{
init_kmem_cache_node(&s->local_node, s);
return 1;
@@ -2312,7 +2312,7 @@ static int calculate_sizes(struct kmem_c
}
-static int kmem_cache_open(struct kmem_cache *s, gfp_t gfpflags,
+static int kmem_cache_open(struct kmem_cache *s,
const char *name, size_t size,
size_t align, unsigned long flags,
void (*ctor)(void *))
@@ -2348,10 +2348,10 @@ static int kmem_cache_open(struct kmem_c
#ifdef CONFIG_NUMA
s->remote_node_defrag_ratio = 1000;
#endif
- if (!init_kmem_cache_nodes(s, gfpflags & ~SLUB_DMA))
+ if (!init_kmem_cache_nodes(s))
goto error;
- if (alloc_kmem_cache_cpus(s, gfpflags & ~SLUB_DMA))
+ if (alloc_kmem_cache_cpus(s))
return 1;
free_kmem_cache_nodes(s);
@@ -2510,6 +2510,10 @@ EXPORT_SYMBOL(kmem_cache_destroy);
struct kmem_cache kmalloc_caches[KMALLOC_CACHES] __cacheline_aligned;
EXPORT_SYMBOL(kmalloc_caches);
+#ifdef CONFIG_ZONE_DMA
+static struct kmem_cache kmalloc_dma_caches[SLUB_PAGE_SHIFT];
+#endif
+
static int __init setup_slub_min_order(char *str)
{
get_option(&str, &slub_min_order);
@@ -2546,116 +2550,26 @@ static int __init setup_slub_nomerge(cha
__setup("slub_nomerge", setup_slub_nomerge);
-static struct kmem_cache *create_kmalloc_cache(struct kmem_cache *s,
- const char *name, int size, gfp_t gfp_flags)
+static void create_kmalloc_cache(struct kmem_cache *s,
+ const char *name, int size, unsigned int flags)
{
- unsigned int flags = 0;
-
- if (gfp_flags & SLUB_DMA)
- flags = SLAB_CACHE_DMA;
-
/*
* This function is called with IRQs disabled during early-boot on
* single CPU so there's no need to take slub_lock here.
*/
- if (!kmem_cache_open(s, gfp_flags, name, size, ARCH_KMALLOC_MINALIGN,
+ if (!kmem_cache_open(s, name, size, ARCH_KMALLOC_MINALIGN,
flags, NULL))
goto panic;
list_add(&s->list, &slab_caches);
- if (sysfs_slab_add(s))
- goto panic;
- return s;
+ if (!sysfs_slab_add(s))
+ return;
panic:
panic("Creation of kmalloc slab %s size=%d failed.\n", name, size);
}
-#ifdef CONFIG_ZONE_DMA
-static struct kmem_cache *kmalloc_caches_dma[SLUB_PAGE_SHIFT];
-
-static void sysfs_add_func(struct work_struct *w)
-{
- struct kmem_cache *s;
-
- down_write(&slub_lock);
- list_for_each_entry(s, &slab_caches, list) {
- if (s->flags & __SYSFS_ADD_DEFERRED) {
- s->flags &= ~__SYSFS_ADD_DEFERRED;
- sysfs_slab_add(s);
- }
- }
- up_write(&slub_lock);
-}
-
-static DECLARE_WORK(sysfs_add_work, sysfs_add_func);
-
-static noinline struct kmem_cache *dma_kmalloc_cache(int index, gfp_t flags)
-{
- struct kmem_cache *s;
- char *text;
- size_t realsize;
- unsigned long slabflags;
- int i;
-
- s = kmalloc_caches_dma[index];
- if (s)
- return s;
-
- /* Dynamically create dma cache */
- if (flags & __GFP_WAIT)
- down_write(&slub_lock);
- else {
- if (!down_write_trylock(&slub_lock))
- goto out;
- }
-
- if (kmalloc_caches_dma[index])
- goto unlock_out;
-
- realsize = kmalloc_caches[index].objsize;
- text = kasprintf(flags & ~SLUB_DMA, "kmalloc_dma-%d",
- (unsigned int)realsize);
-
- s = NULL;
- for (i = 0; i < KMALLOC_CACHES; i++)
- if (!kmalloc_caches[i].size)
- break;
-
- BUG_ON(i >= KMALLOC_CACHES);
- s = kmalloc_caches + i;
-
- /*
- * Must defer sysfs creation to a workqueue because we don't know
- * what context we are called from. Before sysfs comes up, we don't
- * need to do anything because our sysfs initcall will start by
- * adding all existing slabs to sysfs.
- */
- slabflags = SLAB_CACHE_DMA|SLAB_NOTRACK;
- if (slab_state >= SYSFS)
- slabflags |= __SYSFS_ADD_DEFERRED;
-
- if (!text || !kmem_cache_open(s, flags, text,
- realsize, ARCH_KMALLOC_MINALIGN, slabflags, NULL)) {
- s->size = 0;
- kfree(text);
- goto unlock_out;
- }
-
- list_add(&s->list, &slab_caches);
- kmalloc_caches_dma[index] = s;
-
- if (slab_state >= SYSFS)
- schedule_work(&sysfs_add_work);
-
-unlock_out:
- up_write(&slub_lock);
-out:
- return kmalloc_caches_dma[index];
-}
-#endif
-
/*
* Conversion table for small slabs sizes / 8 to the index in the
* kmalloc array. This is necessary for slabs < 192 since we have non power
@@ -2708,7 +2622,7 @@ static struct kmem_cache *get_slab(size_
#ifdef CONFIG_ZONE_DMA
if (unlikely((flags & SLUB_DMA)))
- return dma_kmalloc_cache(index, flags);
+ return &kmalloc_dma_caches[index];
#endif
return &kmalloc_caches[index];
@@ -3047,7 +2961,7 @@ void __init kmem_cache_init(void)
* kmem_cache_open for slab_state == DOWN.
*/
create_kmalloc_cache(&kmalloc_caches[0], "kmem_cache_node",
- sizeof(struct kmem_cache_node), GFP_NOWAIT);
+ sizeof(struct kmem_cache_node), 0);
kmalloc_caches[0].refcount = -1;
caches++;
@@ -3060,18 +2974,18 @@ void __init kmem_cache_init(void)
/* Caches that are not of the two-to-the-power-of size */
if (KMALLOC_MIN_SIZE <= 32) {
create_kmalloc_cache(&kmalloc_caches[1],
- "kmalloc-96", 96, GFP_NOWAIT);
+ "kmalloc-96", 96, 0);
caches++;
}
if (KMALLOC_MIN_SIZE <= 64) {
create_kmalloc_cache(&kmalloc_caches[2],
- "kmalloc-192", 192, GFP_NOWAIT);
+ "kmalloc-192", 192, 0);
caches++;
}
for (i = KMALLOC_SHIFT_LOW; i < SLUB_PAGE_SHIFT; i++) {
create_kmalloc_cache(&kmalloc_caches[i],
- "kmalloc", 1 << i, GFP_NOWAIT);
+ "kmalloc", 1 << i, 0);
caches++;
}
@@ -3134,6 +3048,20 @@ void __init kmem_cache_init(void)
kmem_size = sizeof(struct kmem_cache);
#endif
+#ifdef CONFIG_ZONE_DMA
+ for (i = 1; i < SLUB_PAGE_SHIFT; i++) {
+ struct kmem_cache *s = &kmalloc_caches[i];
+
+ if (s->size) {
+ char *name = kasprintf(GFP_NOWAIT,
+ "dma-kmalloc-%d", s->objsize);
+
+ BUG_ON(!name);
+ create_kmalloc_cache(&kmalloc_dma_caches[i],
+ name, s->objsize, SLAB_CACHE_DMA);
+ }
+ }
+#endif
printk(KERN_INFO
"SLUB: Genslabs=%d, HWalign=%d, Order=%d-%d, MinObjects=%d,"
" CPUs=%d, Nodes=%d\n",
@@ -3236,7 +3164,7 @@ struct kmem_cache *kmem_cache_create(con
s = kmalloc(kmem_size, GFP_KERNEL);
if (s) {
- if (kmem_cache_open(s, GFP_KERNEL, name,
+ if (kmem_cache_open(s, name,
size, align, flags, ctor)) {
list_add(&s->list, &slab_caches);
if (sysfs_slab_add(s)) {
* [S+Q Cleanup4 3/6] slub: Remove static kmem_cache_cpu array for boot
2010-08-20 17:37 [S+Q Cleanup4 0/6] SLUB: Cleanups V4 Christoph Lameter
2010-08-20 17:37 ` [S+Q Cleanup4 1/6] Slub: Force no inlining of debug functions Christoph Lameter
2010-08-20 17:37 ` [S+Q Cleanup4 2/6] slub: remove dynamic dma slab allocation Christoph Lameter
@ 2010-08-20 17:37 ` Christoph Lameter
2010-08-20 17:37 ` [S+Q Cleanup4 4/6] slub: Dynamically size kmalloc cache allocations Christoph Lameter
` (4 subsequent siblings)
7 siblings, 0 replies; 11+ messages in thread
From: Christoph Lameter @ 2010-08-20 17:37 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-mm, Tejun Heo, David Rientjes
[-- Attachment #1: maybe_remove_static --]
[-- Type: text/plain, Size: 1586 bytes --]
The percpu allocator can now handle allocations during early boot.
So drop the static kmem_cache_cpu array.
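The diff below also adds a BUILD_BUG_ON guard so that the percpu allocator's
early reserve is known to cover all boot-time caches. A self-contained sketch
of that idiom (the struct and sizes here are invented):

#include <stddef.h>

struct cpu_state { void **freelist; unsigned long tid; };

#define EARLY_PERCPU_RESERVE	(12 * 1024)	/* hypothetical early reserve */
#define NR_BOOT_CACHES		26		/* hypothetical number of boot caches */

/* Produces a negative array size, i.e. a compile error, when the
 * condition is true -- the same trick BUILD_BUG_ON uses. */
#define BUILD_BUG_ON_SKETCH(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

static void check_early_reserve(void)
{
	BUILD_BUG_ON_SKETCH(EARLY_PERCPU_RESERVE <
			NR_BOOT_CACHES * sizeof(struct cpu_state));
}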
Cc: Tejun Heo <tj@kernel.org>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
---
mm/slub.c | 17 ++++-------------
1 file changed, 4 insertions(+), 13 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2010-08-18 09:41:00.000000000 -0500
+++ linux-2.6/mm/slub.c 2010-08-18 09:55:20.000000000 -0500
@@ -2062,23 +2062,14 @@ init_kmem_cache_node(struct kmem_cache_n
#endif
}
-static DEFINE_PER_CPU(struct kmem_cache_cpu, kmalloc_percpu[KMALLOC_CACHES]);
-
static inline int alloc_kmem_cache_cpus(struct kmem_cache *s)
{
- if (s < kmalloc_caches + KMALLOC_CACHES && s >= kmalloc_caches)
- /*
- * Boot time creation of the kmalloc array. Use static per cpu data
- * since the per cpu allocator is not available yet.
- */
- s->cpu_slab = kmalloc_percpu + (s - kmalloc_caches);
- else
- s->cpu_slab = alloc_percpu(struct kmem_cache_cpu);
+ BUILD_BUG_ON(PERCPU_DYNAMIC_EARLY_SIZE <
+ SLUB_PAGE_SHIFT * sizeof(struct kmem_cache_cpu));
- if (!s->cpu_slab)
- return 0;
+ s->cpu_slab = alloc_percpu(struct kmem_cache_cpu);
- return 1;
+ return s->cpu_slab != NULL;
}
#ifdef CONFIG_NUMA
* [S+Q Cleanup4 4/6] slub: Dynamically size kmalloc cache allocations
2010-08-20 17:37 [S+Q Cleanup4 0/6] SLUB: Cleanups V4 Christoph Lameter
` (2 preceding siblings ...)
2010-08-20 17:37 ` [S+Q Cleanup4 3/6] slub: Remove static kmem_cache_cpu array for boot Christoph Lameter
@ 2010-08-20 17:37 ` Christoph Lameter
2010-08-20 17:37 ` [S+Q Cleanup4 5/6] slub: Extract hooks for memory checkers from hotpaths Christoph Lameter
` (3 subsequent siblings)
7 siblings, 0 replies; 11+ messages in thread
From: Christoph Lameter @ 2010-08-20 17:37 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-mm, David Rientjes
[-- Attachment #1: slub_dynamic_kmem_alloc --]
[-- Type: text/plain, Size: 12219 bytes --]
The kmalloc caches are statically defined and may take up a lot of space just
because the node array has to be dimensioned for the largest
node count supported.
This patch makes the size of the kmem_cache structure dynamic throughout by
creating a kmem_cache slab cache for the kmem_cache objects. The bootstrap
occurs by allocating the initial one or two kmem_cache objects from the
page allocator.
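The saving comes from sizing each kmem_cache for the actual node count rather
than the compile-time maximum. Roughly, as a hedged userspace sketch with
invented names:

#include <stddef.h>
#include <stdlib.h>

struct node_slot;			/* per-node bookkeeping, left opaque here */

struct cache {
	size_t object_size;
	struct node_slot *node[];	/* flexible array: one entry per online node */
};

/* Allocate only offsetof(struct cache, node) + nr_nodes pointers instead
 * of a structure dimensioned for the largest supported node count. */
static struct cache *cache_create(size_t object_size, int nr_nodes)
{
	size_t sz = offsetof(struct cache, node) +
			(size_t)nr_nodes * sizeof(struct node_slot *);
	struct cache *c = calloc(1, sz);

	if (c)
		c->object_size = object_size;
	return c;
}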
C2->C3
- Fix various issues indicated by David.
- Make create_kmalloc_cache() return a kmem_cache pointer.
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
---
include/linux/slub_def.h | 7 -
mm/slub.c | 191 +++++++++++++++++++++++++++++++++--------------
2 files changed, 140 insertions(+), 58 deletions(-)
Index: linux-2.6/include/linux/slub_def.h
===================================================================
--- linux-2.6.orig/include/linux/slub_def.h 2010-08-19 15:30:54.000000000 -0500
+++ linux-2.6/include/linux/slub_def.h 2010-08-19 16:31:52.000000000 -0500
@@ -139,19 +139,16 @@ struct kmem_cache {
#ifdef CONFIG_ZONE_DMA
#define SLUB_DMA __GFP_DMA
-/* Reserve extra caches for potential DMA use */
-#define KMALLOC_CACHES (2 * SLUB_PAGE_SHIFT)
#else
/* Disable DMA functionality */
#define SLUB_DMA (__force gfp_t)0
-#define KMALLOC_CACHES SLUB_PAGE_SHIFT
#endif
/*
* We keep the general caches in an array of slab caches that are used for
* 2^x bytes of allocations.
*/
-extern struct kmem_cache kmalloc_caches[KMALLOC_CACHES];
+extern struct kmem_cache *kmalloc_caches[SLUB_PAGE_SHIFT];
/*
* Sorry that the following has to be that ugly but some versions of GCC
@@ -216,7 +213,7 @@ static __always_inline struct kmem_cache
if (index == 0)
return NULL;
- return &kmalloc_caches[index];
+ return kmalloc_caches[index];
}
void *kmem_cache_alloc(struct kmem_cache *, gfp_t);
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2010-08-19 15:36:35.000000000 -0500
+++ linux-2.6/mm/slub.c 2010-08-19 16:32:55.000000000 -0500
@@ -168,7 +168,6 @@ static inline int kmem_cache_debug(struc
/* Internal SLUB flags */
#define __OBJECT_POISON 0x80000000UL /* Poison object */
-#define __SYSFS_ADD_DEFERRED 0x40000000UL /* Not yet visible via sysfs */
static int kmem_size = sizeof(struct kmem_cache);
@@ -178,7 +177,7 @@ static struct notifier_block slab_notifi
static enum {
DOWN, /* No slab functionality available */
- PARTIAL, /* kmem_cache_open() works but kmalloc does not */
+ PARTIAL, /* Kmem_cache_node works */
UP, /* Everything works but does not show up in sysfs */
SYSFS /* Sysfs up */
} slab_state = DOWN;
@@ -2073,6 +2072,8 @@ static inline int alloc_kmem_cache_cpus(
}
#ifdef CONFIG_NUMA
+static struct kmem_cache *kmem_cache_node;
+
/*
* No kmalloc_node yet so do it by hand. We know that this is the first
* slab on the node for this slabcache. There are no concurrent accesses
@@ -2088,9 +2089,9 @@ static void early_kmem_cache_node_alloc(
struct kmem_cache_node *n;
unsigned long flags;
- BUG_ON(kmalloc_caches->size < sizeof(struct kmem_cache_node));
+ BUG_ON(kmem_cache_node->size < sizeof(struct kmem_cache_node));
- page = new_slab(kmalloc_caches, GFP_NOWAIT, node);
+ page = new_slab(kmem_cache_node, GFP_NOWAIT, node);
BUG_ON(!page);
if (page_to_nid(page) != node) {
@@ -2102,15 +2103,15 @@ static void early_kmem_cache_node_alloc(
n = page->freelist;
BUG_ON(!n);
- page->freelist = get_freepointer(kmalloc_caches, n);
+ page->freelist = get_freepointer(kmem_cache_node, n);
page->inuse++;
- kmalloc_caches->node[node] = n;
+ kmem_cache_node->node[node] = n;
#ifdef CONFIG_SLUB_DEBUG
- init_object(kmalloc_caches, n, 1);
- init_tracking(kmalloc_caches, n);
+ init_object(kmem_cache_node, n, 1);
+ init_tracking(kmem_cache_node, n);
#endif
- init_kmem_cache_node(n, kmalloc_caches);
- inc_slabs_node(kmalloc_caches, node, page->objects);
+ init_kmem_cache_node(n, kmem_cache_node);
+ inc_slabs_node(kmem_cache_node, node, page->objects);
/*
* lockdep requires consistent irq usage for each lock
@@ -2128,8 +2129,10 @@ static void free_kmem_cache_nodes(struct
for_each_node_state(node, N_NORMAL_MEMORY) {
struct kmem_cache_node *n = s->node[node];
+
if (n)
- kmem_cache_free(kmalloc_caches, n);
+ kmem_cache_free(kmem_cache_node, n);
+
s->node[node] = NULL;
}
}
@@ -2145,7 +2148,7 @@ static int init_kmem_cache_nodes(struct
early_kmem_cache_node_alloc(node);
continue;
}
- n = kmem_cache_alloc_node(kmalloc_caches,
+ n = kmem_cache_alloc_node(kmem_cache_node,
GFP_KERNEL, node);
if (!n) {
@@ -2498,11 +2501,13 @@ EXPORT_SYMBOL(kmem_cache_destroy);
* Kmalloc subsystem
*******************************************************************/
-struct kmem_cache kmalloc_caches[KMALLOC_CACHES] __cacheline_aligned;
+struct kmem_cache *kmalloc_caches[SLUB_PAGE_SHIFT];
EXPORT_SYMBOL(kmalloc_caches);
+static struct kmem_cache *kmem_cache;
+
#ifdef CONFIG_ZONE_DMA
-static struct kmem_cache kmalloc_dma_caches[SLUB_PAGE_SHIFT];
+static struct kmem_cache *kmalloc_dma_caches[SLUB_PAGE_SHIFT];
#endif
static int __init setup_slub_min_order(char *str)
@@ -2541,9 +2546,13 @@ static int __init setup_slub_nomerge(cha
__setup("slub_nomerge", setup_slub_nomerge);
-static void create_kmalloc_cache(struct kmem_cache *s,
- const char *name, int size, unsigned int flags)
+static struct kmem_cache *__init create_kmalloc_cache(const char *name,
+ int size, unsigned int flags)
{
+ struct kmem_cache *s;
+
+ s = kmem_cache_alloc(kmem_cache, GFP_NOWAIT);
+
/*
* This function is called with IRQs disabled during early-boot on
* single CPU so there's no need to take slub_lock here.
@@ -2553,12 +2562,11 @@ static void create_kmalloc_cache(struct
goto panic;
list_add(&s->list, &slab_caches);
-
- if (!sysfs_slab_add(s))
- return;
+ return s;
panic:
panic("Creation of kmalloc slab %s size=%d failed.\n", name, size);
+ return NULL;
}
/*
@@ -2613,10 +2621,10 @@ static struct kmem_cache *get_slab(size_
#ifdef CONFIG_ZONE_DMA
if (unlikely((flags & SLUB_DMA)))
- return &kmalloc_dma_caches[index];
+ return kmalloc_dma_caches[index];
#endif
- return &kmalloc_caches[index];
+ return kmalloc_caches[index];
}
void *__kmalloc(size_t size, gfp_t flags)
@@ -2940,46 +2948,113 @@ static int slab_memory_callback(struct n
* Basic setup of slabs
*******************************************************************/
+/*
+ * Used for early kmem_cache structures that were allocated using
+ * the page allocator
+ */
+
+static void __init kmem_cache_bootstrap_fixup(struct kmem_cache *s)
+{
+ int node;
+
+ list_add(&s->list, &slab_caches);
+ s->refcount = -1;
+
+ for_each_node_state(node, N_NORMAL_MEMORY) {
+ struct kmem_cache_node *n = get_node(s, node);
+ struct page *p;
+
+ if (n) {
+ list_for_each_entry(p, &n->partial, lru)
+ p->slab = s;
+
+#ifdef CONFIG_SLAB_DEBUG
+ list_for_each_entry(p, &n->full, lru)
+ p->slab = s;
+#endif
+ }
+ }
+}
+
void __init kmem_cache_init(void)
{
int i;
int caches = 0;
+ struct kmem_cache *temp_kmem_cache;
+ int order;
#ifdef CONFIG_NUMA
+ struct kmem_cache *temp_kmem_cache_node;
+ unsigned long kmalloc_size;
+
+ kmem_size = offsetof(struct kmem_cache, node) +
+ nr_node_ids * sizeof(struct kmem_cache_node *);
+
+ /* Allocate two kmem_caches from the page allocator */
+ kmalloc_size = ALIGN(kmem_size, cache_line_size());
+ order = get_order(2 * kmalloc_size);
+ kmem_cache = (void *)__get_free_pages(GFP_NOWAIT, order);
+
/*
* Must first have the slab cache available for the allocations of the
* struct kmem_cache_node's. There is special bootstrap code in
* kmem_cache_open for slab_state == DOWN.
*/
- create_kmalloc_cache(&kmalloc_caches[0], "kmem_cache_node",
- sizeof(struct kmem_cache_node), 0);
- kmalloc_caches[0].refcount = -1;
- caches++;
+ kmem_cache_node = (void *)kmem_cache + kmalloc_size;
+
+ kmem_cache_open(kmem_cache_node, "kmem_cache_node",
+ sizeof(struct kmem_cache_node),
+ 0, SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
hotplug_memory_notifier(slab_memory_callback, SLAB_CALLBACK_PRI);
+#else
+ /* Allocate a single kmem_cache from the page allocator */
+ kmem_size = sizeof(struct kmem_cache);
+ order = get_order(kmem_size);
+ kmem_cache = (void *)__get_free_pages(GFP_NOWAIT, order);
#endif
/* Able to allocate the per node structures */
slab_state = PARTIAL;
- /* Caches that are not of the two-to-the-power-of size */
- if (KMALLOC_MIN_SIZE <= 32) {
- create_kmalloc_cache(&kmalloc_caches[1],
- "kmalloc-96", 96, 0);
- caches++;
- }
- if (KMALLOC_MIN_SIZE <= 64) {
- create_kmalloc_cache(&kmalloc_caches[2],
- "kmalloc-192", 192, 0);
- caches++;
- }
+ temp_kmem_cache = kmem_cache;
+ kmem_cache_open(kmem_cache, "kmem_cache", kmem_size,
+ 0, SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
+ kmem_cache = kmem_cache_alloc(kmem_cache, GFP_NOWAIT);
+ memcpy(kmem_cache, temp_kmem_cache, kmem_size);
- for (i = KMALLOC_SHIFT_LOW; i < SLUB_PAGE_SHIFT; i++) {
- create_kmalloc_cache(&kmalloc_caches[i],
- "kmalloc", 1 << i, 0);
- caches++;
- }
+#ifdef CONFIG_NUMA
+ /*
+ * Allocate kmem_cache_node properly from the kmem_cache slab.
+ * kmem_cache_node is separately allocated so no need to
+ * update any list pointers.
+ */
+ temp_kmem_cache_node = kmem_cache_node;
+ kmem_cache_node = kmem_cache_alloc(kmem_cache, GFP_NOWAIT);
+ memcpy(kmem_cache_node, temp_kmem_cache_node, kmem_size);
+
+ kmem_cache_bootstrap_fixup(kmem_cache_node);
+
+ caches++;
+#else
+ /*
+ * kmem_cache has kmem_cache_node embedded and we moved it!
+ * Update the list heads
+ */
+ INIT_LIST_HEAD(&kmem_cache->local_node.partial);
+ list_splice(&temp_kmem_cache->local_node.partial, &kmem_cache->local_node.partial);
+#ifdef CONFIG_SLUB_DEBUG
+ INIT_LIST_HEAD(&kmem_cache->local_node.full);
+ list_splice(&temp_kmem_cache->local_node.full, &kmem_cache->local_node.full);
+#endif
+#endif
+ kmem_cache_bootstrap_fixup(kmem_cache);
+ caches++;
+ /* Free temporary boot structure */
+ free_pages((unsigned long)temp_kmem_cache, order);
+
+ /* Now we can use the kmem_cache to allocate kmalloc slabs */
/*
* Patch up the size_index table if we have strange large alignment
@@ -3019,6 +3094,22 @@ void __init kmem_cache_init(void)
size_index[size_index_elem(i)] = 8;
}
+ /* Caches that are not of the two-to-the-power-of size */
+ if (KMALLOC_MIN_SIZE <= 32) {
+ kmalloc_caches[1] = create_kmalloc_cache("kmalloc-96", 96, 0);
+ caches++;
+ }
+
+ if (KMALLOC_MIN_SIZE <= 64) {
+ kmalloc_caches[2] = create_kmalloc_cache("kmalloc-192", 192, 0);
+ caches++;
+ }
+
+ for (i = KMALLOC_SHIFT_LOW; i < SLUB_PAGE_SHIFT; i++) {
+ kmalloc_caches[i] = create_kmalloc_cache("kmalloc", 1 << i, 0);
+ caches++;
+ }
+
slab_state = UP;
/* Provide the correct kmalloc names now that the caches are up */
@@ -3026,30 +3117,24 @@ void __init kmem_cache_init(void)
char *s = kasprintf(GFP_NOWAIT, "kmalloc-%d", 1 << i);
BUG_ON(!s);
- kmalloc_caches[i].name = s;
+ kmalloc_caches[i]->name = s;
}
#ifdef CONFIG_SMP
register_cpu_notifier(&slab_notifier);
#endif
-#ifdef CONFIG_NUMA
- kmem_size = offsetof(struct kmem_cache, node) +
- nr_node_ids * sizeof(struct kmem_cache_node *);
-#else
- kmem_size = sizeof(struct kmem_cache);
-#endif
#ifdef CONFIG_ZONE_DMA
- for (i = 1; i < SLUB_PAGE_SHIFT; i++) {
- struct kmem_cache *s = &kmalloc_caches[i];
+ for (i = 0; i < SLUB_PAGE_SHIFT; i++) {
+ struct kmem_cache *s = kmalloc_caches[i];
- if (s->size) {
+ if (s && s->size) {
char *name = kasprintf(GFP_NOWAIT,
"dma-kmalloc-%d", s->objsize);
BUG_ON(!name);
- create_kmalloc_cache(&kmalloc_dma_caches[i],
- name, s->objsize, SLAB_CACHE_DMA);
+ kmalloc_dma_caches[i] = create_kmalloc_cache(name,
+ s->objsize, SLAB_CACHE_DMA);
}
}
#endif
* [S+Q Cleanup4 5/6] slub: Extract hooks for memory checkers from hotpaths
2010-08-20 17:37 [S+Q Cleanup4 0/6] SLUB: Cleanups V4 Christoph Lameter
` (3 preceding siblings ...)
2010-08-20 17:37 ` [S+Q Cleanup4 4/6] slub: Dynamically size kmalloc cache allocations Christoph Lameter
@ 2010-08-20 17:37 ` Christoph Lameter
2010-08-20 17:37 ` [S+Q Cleanup4 6/6] slub: Move gfpflag masking out of the hotpath Christoph Lameter
` (2 subsequent siblings)
7 siblings, 0 replies; 11+ messages in thread
From: Christoph Lameter @ 2010-08-20 17:37 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-mm, David Rientjes
[-- Attachment #1: slub_extract --]
[-- Type: text/plain, Size: 3210 bytes --]
Extract the code that memory checkers and other verification tools use from
the hotpaths. This makes it easier to add new ones and reduces the disturbance
of the hotpaths.
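The shape of the change, as a small standalone C sketch (the hook and config
names below are illustrative, not the ones used in slub.c):

#include <stdio.h>
#include <stdlib.h>

/* When no checker is configured, both hooks are empty static inlines and
 * the compiler emits nothing for them in the hot path. */
#ifdef CONFIG_CHECKERS
static inline void alloc_hook(void *object, size_t size)
{
	fprintf(stderr, "alloc %p, %zu bytes\n", object, size);
}
static inline void free_hook(void *object)
{
	fprintf(stderr, "free %p\n", object);
}
#else
static inline void alloc_hook(void *object, size_t size) { (void)object; (void)size; }
static inline void free_hook(void *object) { (void)object; }
#endif

void *hot_alloc(size_t size)
{
	void *object = malloc(size);	/* the hot path itself stays uncluttered */

	alloc_hook(object, size);
	return object;
}

void hot_free(void *object)
{
	free_hook(object);
	free(object);
}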
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
---
mm/slub.c | 49 ++++++++++++++++++++++++++++++++++++++-----------
1 file changed, 38 insertions(+), 11 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2010-08-19 16:32:55.000000000 -0500
+++ linux-2.6/mm/slub.c 2010-08-19 16:34:07.000000000 -0500
@@ -791,6 +791,37 @@ static void trace(struct kmem_cache *s,
}
/*
+ * Hooks for other subsystems that check memory allocations. In a typical
+ * production configuration these hooks all should produce no code at all.
+ */
+static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
+{
+ lockdep_trace_alloc(flags);
+ might_sleep_if(flags & __GFP_WAIT);
+
+ return should_failslab(s->objsize, flags, s->flags);
+}
+
+static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags, void *object)
+{
+ kmemcheck_slab_alloc(s, flags, object, s->objsize);
+ kmemleak_alloc_recursive(object, s->objsize, 1, s->flags, flags);
+}
+
+static inline void slab_free_hook(struct kmem_cache *s, void *x)
+{
+ kmemleak_free_recursive(x, s->flags);
+}
+
+static inline void slab_free_hook_irq(struct kmem_cache *s, void *object)
+{
+ kmemcheck_slab_free(s, object, s->objsize);
+ debug_check_no_locks_freed(object, s->objsize);
+ if (!(s->flags & SLAB_DEBUG_OBJECTS))
+ debug_check_no_obj_freed(object, s->objsize);
+}
+
+/*
* Tracking of fully allocated slabs for debugging purposes.
*/
static void add_full(struct kmem_cache_node *n, struct page *page)
@@ -1696,10 +1727,7 @@ static __always_inline void *slab_alloc(
gfpflags &= gfp_allowed_mask;
- lockdep_trace_alloc(gfpflags);
- might_sleep_if(gfpflags & __GFP_WAIT);
-
- if (should_failslab(s->objsize, gfpflags, s->flags))
+ if (slab_pre_alloc_hook(s, gfpflags))
return NULL;
local_irq_save(flags);
@@ -1718,8 +1746,7 @@ static __always_inline void *slab_alloc(
if (unlikely(gfpflags & __GFP_ZERO) && object)
memset(object, 0, s->objsize);
- kmemcheck_slab_alloc(s, gfpflags, object, s->objsize);
- kmemleak_alloc_recursive(object, s->objsize, 1, s->flags, gfpflags);
+ slab_post_alloc_hook(s, gfpflags, object);
return object;
}
@@ -1849,13 +1876,13 @@ static __always_inline void slab_free(st
struct kmem_cache_cpu *c;
unsigned long flags;
- kmemleak_free_recursive(x, s->flags);
+ slab_free_hook(s, x);
+
local_irq_save(flags);
c = __this_cpu_ptr(s->cpu_slab);
- kmemcheck_slab_free(s, object, s->objsize);
- debug_check_no_locks_freed(object, s->objsize);
- if (!(s->flags & SLAB_DEBUG_OBJECTS))
- debug_check_no_obj_freed(object, s->objsize);
+
+ slab_free_hook_irq(s, x);
+
if (likely(page == c->page && c->node >= 0)) {
set_freepointer(s, object, c->freelist);
c->freelist = object;
* [S+Q Cleanup4 6/6] slub: Move gfpflag masking out of the hotpath
2010-08-20 17:37 [S+Q Cleanup4 0/6] SLUB: Cleanups V4 Christoph Lameter
` (4 preceding siblings ...)
2010-08-20 17:37 ` [S+Q Cleanup4 5/6] slub: Extract hooks for memory checkers from hotpaths Christoph Lameter
@ 2010-08-20 17:37 ` Christoph Lameter
2010-08-20 21:06 ` [S+Q Cleanup4 0/6] SLUB: Cleanups V4 David Rientjes
2010-08-23 17:17 ` Pekka Enberg
7 siblings, 0 replies; 11+ messages in thread
From: Christoph Lameter @ 2010-08-20 17:37 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-mm, David Rientjes
[-- Attachment #1: slub_move_gfpflags --]
[-- Type: text/plain, Size: 1836 bytes --]
Move the gfpflags masking into the hooks for the checkers and into the slowpaths.
gfpflags masking requires access to a global variable and thus adds an
additional cacheline reference to the hotpaths.
If no hooks are active then the gfpflags masking results in
code that the compiler can toss out.
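Loosely, and only as an illustration with invented names, the intent looks
like this:

#include <stdlib.h>

/* Global mask, analogous to gfp_allowed_mask: reading it in the fast path
 * costs an extra cacheline reference. */
unsigned int allowed_mask = 0xffffffffu;

static inline unsigned int pre_alloc_hook(unsigned int flags)
{
#ifdef CONFIG_CHECKERS
	flags &= allowed_mask;	/* only hooks and the slowpath pay for this load */
	/* checker calls would look at the masked flags here */
#endif
	return flags;
}

void *fast_alloc(size_t size, unsigned int flags)
{
	flags = pre_alloc_hook(flags);	/* folds away when no checkers are built in */
	(void)flags;
	return malloc(size);		/* stand-in for the real fast path */
}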
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
---
mm/slub.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2010-08-20 11:43:50.000000000 -0500
+++ linux-2.6/mm/slub.c 2010-08-20 11:43:50.000000000 -0500
@@ -796,6 +796,7 @@ static void trace(struct kmem_cache *s,
*/
static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
{
+ flags &= gfp_allowed_mask;
lockdep_trace_alloc(flags);
might_sleep_if(flags & __GFP_WAIT);
@@ -804,6 +805,7 @@ static inline int slab_pre_alloc_hook(st
static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags, void *object)
{
+ flags &= gfp_allowed_mask;
kmemcheck_slab_alloc(s, flags, object, s->objsize);
kmemleak_alloc_recursive(object, s->objsize, 1, s->flags, flags);
}
@@ -1677,6 +1679,7 @@ new_slab:
goto load_freelist;
}
+ gfpflags &= gfp_allowed_mask;
if (gfpflags & __GFP_WAIT)
local_irq_enable();
@@ -1725,8 +1728,6 @@ static __always_inline void *slab_alloc(
struct kmem_cache_cpu *c;
unsigned long flags;
- gfpflags &= gfp_allowed_mask;
-
if (slab_pre_alloc_hook(s, gfpflags))
return NULL;
* Re: [S+Q Cleanup4 0/6] SLUB: Cleanups V4
2010-08-20 17:37 [S+Q Cleanup4 0/6] SLUB: Cleanups V4 Christoph Lameter
` (5 preceding siblings ...)
2010-08-20 17:37 ` [S+Q Cleanup4 6/6] slub: Move gfpflag masking out of the hotpath Christoph Lameter
@ 2010-08-20 21:06 ` David Rientjes
2010-08-20 23:12 ` Christoph Lameter
2010-08-23 17:17 ` Pekka Enberg
7 siblings, 1 reply; 11+ messages in thread
From: David Rientjes @ 2010-08-20 21:06 UTC (permalink / raw)
To: Christoph Lameter; +Cc: Pekka Enberg, linux-mm
On Fri, 20 Aug 2010, Christoph Lameter wrote:
> Patch 3
>
> Remove static allocation of kmem_cache_cpu array and rely on the
> percpu allocator to allocate memory for the array on bootup.
>
I don't see this patch in the v4 posting of your series.
* Re: [S+Q Cleanup4 0/6] SLUB: Cleanups V4
2010-08-20 21:06 ` [S+Q Cleanup4 0/6] SLUB: Cleanups V4 David Rientjes
@ 2010-08-20 23:12 ` Christoph Lameter
2010-08-20 23:53 ` David Rientjes
0 siblings, 1 reply; 11+ messages in thread
From: Christoph Lameter @ 2010-08-20 23:12 UTC (permalink / raw)
To: David Rientjes; +Cc: Pekka Enberg, linux-mm
On Fri, 20 Aug 2010, David Rientjes wrote:
> > Remove static allocation of kmem_cache_cpu array and rely on the
> > percpu allocator to allocate memory for the array on bootup.
> >
>
> I don't see this patch in the v4 posting of your series.
I see it on the list. So I guess just wait until it reaches you.
Return-path: <owner-linux-mm@kvack.org>
Envelope-to: cl@localhost
Delivery-date: Fri, 20 Aug 2010 18:06:17 -0500
Received: from localhost ([127.0.0.1] helo=router.home)
by router.home with esmtp (Exim 4.71)
(envelope-from <owner-linux-mm@kvack.org>)
id 1OmafA-0002Aj-Td
for cl@localhost; Fri, 20 Aug 2010 18:06:17 -0500
Received: from imap1.linux-foundation.org [140.211.169.55]
by router.home with IMAP (fetchmail-6.3.9-rc2)
for <cl@localhost> (single-drop); Fri, 20 Aug 2010 18:06:16 -0500
(CDT)
Received: from smtp1.linux-foundation.org (smtp1.linux-foundation.org
[140.211.169.13])
by imap1.linux-foundation.org
(8.13.5.20060308/8.13.5/Debian-3ubuntu1.1)
with ESMTP id o7KN2jWM011206;
Fri, 20 Aug 2010 16:02:45 -0700
Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17])
by smtp1.linux-foundation.org (8.14.2/8.13.5/Debian-3ubuntu1.1) with
ESMTP id o7KN29PM008217;
Fri, 20 Aug 2010 16:02:10 -0700
Received: by kanga.kvack.org (Postfix)
id BA5636006BA; Fri, 20 Aug 2010 19:02:06 -0400 (EDT)
Delivered-To: linux-mm-outgoing@kvack.org
Received: by kanga.kvack.org (Postfix, from userid 0)
id B872D6004CE; Fri, 20 Aug 2010 19:02:06 -0400 (EDT)
X-Original-To: int-list-linux-mm@kvack.org
Delivered-To: int-list-linux-mm@kvack.org
Received: by kanga.kvack.org (Postfix, from userid 63042)
id 8D1896004CE; Fri, 20 Aug 2010 19:02:06 -0400 (EDT)
X-Original-To: linux-mm@kvack.org
Delivered-To: linux-mm@kvack.org
Received: from mail203.messagelabs.com (mail203.messagelabs.com
[216.82.254.243])
by kanga.kvack.org (Postfix) with SMTP id 1E8B96004CE
for <linux-mm@kvack.org>; Fri, 20 Aug 2010 19:02:06 -0400 (EDT)
X-VirusChecked: Checked
X-Env-Sender: cl@linux.com
X-Msg-Ref: server-13.tower-203.messagelabs.com!1282345324!71922773!1
X-StarScan-Version: 6.2.4; banners=-,-,-
X-Originating-IP: [76.13.13.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=UNPARSEABLE_RELAY
Received: (qmail 24791 invoked from network); 20 Aug 2010 23:02:05 -0000
Received: from smtp106.prem.mail.ac4.yahoo.com (HELO
smtp106.prem.mail.ac4.yahoo.com) (76.13.13.45)
by server-13.tower-203.messagelabs.com with SMTP; 20 Aug 2010 23:02:05
-0000
* Re: [S+Q Cleanup4 0/6] SLUB: Cleanups V4
2010-08-20 23:12 ` Christoph Lameter
@ 2010-08-20 23:53 ` David Rientjes
0 siblings, 0 replies; 11+ messages in thread
From: David Rientjes @ 2010-08-20 23:53 UTC (permalink / raw)
To: Christoph Lameter; +Cc: Pekka Enberg, linux-mm
On Fri, 20 Aug 2010, Christoph Lameter wrote:
> > > Remove static allocation of kmem_cache_cpu array and rely on the
> > > percpu allocator to allocate memory for the array on bootup.
> > >
> >
> > I don't see this patch in the v4 posting of your series.
>
> I see it on the list. So I guess just wait until it reaches you.
>
Ah, it finally hit me and marc.info, thanks!
* Re: [S+Q Cleanup4 0/6] SLUB: Cleanups V4
2010-08-20 17:37 [S+Q Cleanup4 0/6] SLUB: Cleanups V4 Christoph Lameter
` (6 preceding siblings ...)
2010-08-20 21:06 ` [S+Q Cleanup4 0/6] SLUB: Cleanups V4 David Rientjes
@ 2010-08-23 17:17 ` Pekka Enberg
7 siblings, 0 replies; 11+ messages in thread
From: Pekka Enberg @ 2010-08-23 17:17 UTC (permalink / raw)
To: Christoph Lameter; +Cc: linux-mm, David Rientjes, LKML
On 20.8.2010 20.37, Christoph Lameter wrote:
> I think it may be best to first try to merge these and make sure that
> they are fine before we go step by step through the unification patches.
> I hope they can go into -next.
I've applied these patches and queued them for -next. Thanks guys!