* Common [00/19] Sl[auo]b: Common code rework V9
@ 2012-08-02 20:15 Christoph Lameter
2012-08-02 20:15 ` Common [01/19] slub: Add debugging to verify correct cache use on kmem_cache_free() Christoph Lameter
` (19 more replies)
0 siblings, 20 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
Note that the first three patches are candidates
for 3.5 since they fix certain things.
The rest could go into -next once we are
through with initial review.
V8->V9:
- Fix numerous things pointed out by Glauber.
- Cleanup the way error handling works in the
common kmem_cache_create() function.
- General cleanup by breaking things up
into multiple patches where necessary.
V7->V8:
- Do not use kfree for kmem_cache in slub.
- Add more patches up to a common
scheme for object alignment.
V6->V7:
- Omit pieces that were merged for 3.6
- Fix issues pointed out by Glauber.
- Include the patches up to the point at which
the slab name handling is unified
V5->V6:
- Patches against Pekka's for-next tree.
- Go slow and cut down to just patches that are safe
(there will likely be some churn already due to the
mutex unification between slabs)
- More to come next week when I have more time (it took me
almost the whole week to catch up after being gone for a while).
V4->V5:
- Rediff against current upstream + Pekka's cleanup branch.
V3->V4:
- Do not use the COMMON macro anymore.
- Fixup various issues
- No general sysfs support yet due to lockdep issues with
keys in kmalloc'ed memory.
V2->V3:
- Incorporate more feedback from Joonsoo Kim and Glauber Costa
- Add a couple more patches to deal with slab duping and move
more code to slab_common.c
V1->V2:
- Incorporate glommer's feedback.
- Add 2 more patches dealing with common code in kmem_cache_destroy
This is a series of patches that extracts common functionality from the
slab allocators into a common code base. The intent is to standardize
as much of the allocator behavior as possible while keeping the
distinctive features of each allocator, which are mostly due to their
storage format and serialization approaches.
This patchset makes a beginning by extracting common functionality in
kmem_cache_create() and kmem_cache_destroy(). However, there are
numerous other areas where such work could be beneficial:
1. Extract the sysfs support from SLUB and make it common. That way
all allocators have a common sysfs API and can be handled in the same
way regardless of the allocator chosen.
2. Extract the error reporting and checking from SLUB and make
it available for all allocators. This means that all allocators
will gain the same resiliency and error handling capabilities.
3. Extract the memory hotplug and cpu hotplug handling. It seems that
SLAB may be more sophisticated here. Having common code here will
make it easier to maintain the special code.
4. Extract the aliasing capability of SLUB. This will enable fast
slab creation without creating too many additional slab caches.
With aliasing, the arrays of caches of varying sizes in numerous
subsystems do not cause the creation of numerous separate slab
caches. Storage density is increased and the cache footprint is
reduced.
Ultimately it is to be hoped that the special code for each allocator
shrinks to a minimum. This will also make it easier to make modifications
to allocators.
In the far future one could envision that the current allocators will
just become storage algorithms that can be chosen based on the needs of
the subsystem, e.g.:
CPU cache dependent performance = Bonwick allocator (SLAB)
Minimal cycle count and cache footprint = SLUB
Maximum storage density = K&R allocator (SLOB)
* Common [01/19] slub: Add debugging to verify correct cache use on kmem_cache_free()
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 20:37 ` David Rientjes
2012-08-02 20:15 ` Common [02/19] slub: Use kmem_cache for the kmem_cache structure Christoph Lameter
` (18 subsequent siblings)
19 siblings, 1 reply; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: slub_new_debug --]
[-- Type: text/plain, Size: 1026 bytes --]
Add additional debugging to check that the object is actually from the cache
the caller claims. A free to the wrong cache currently trips up some other
debugging code further down the line, and it takes a lot of effort to infer
from that output what was actually happening.
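To make the effect concrete, here is a minimal sketch of the misuse this
check now reports directly (the cache names are hypothetical):

	struct kmem_cache *cache_a, *cache_b;	/* two distinct caches */
	void *obj;

	obj = kmem_cache_alloc(cache_a, GFP_KERNEL);
	kmem_cache_free(cache_b, obj);	/* wrong cache: with debugging
					   enabled this now warns and
					   returns instead of quietly
					   corrupting cache_b */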
Signed-off-by: Christoph Lameter <cl@linux.com>
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2012-08-02 13:52:35.314898373 -0500
+++ linux-2.6/mm/slub.c 2012-08-02 13:52:38.662958767 -0500
@@ -2607,6 +2607,13 @@
page = virt_to_head_page(x);
+ if (kmem_cache_debug(s) && page->slab != s) {
+ printk("kmem_cache_free: Wrong slab cache. %s but object"
+ " is from %s\n", page->slab->name, s->name);
+ WARN_ON(1);
+ return;
+ }
+
slab_free(s, page, x, _RET_IP_);
trace_kmem_cache_free(_RET_IP_, x);
* Common [02/19] slub: Use kmem_cache for the kmem_cache structure
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
2012-08-02 20:15 ` Common [01/19] slub: Add debugging to verify correct cache use on kmem_cache_free() Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 20:52 ` David Rientjes
2012-08-03 13:20 ` Glauber Costa
2012-08-02 20:15 ` Common [03/19] Rename oops label Christoph Lameter
` (17 subsequent siblings)
19 siblings, 2 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: slub_use_kmem_cache --]
[-- Type: text/plain, Size: 1696 bytes --]
Do not use kmalloc() but kmem_cache_alloc() for the allocation
of the kmem_cache structures in slub.
This is the way it is supposed to be. Recent merges lost
the freeing of the kmem_cache structure, so this also
fixes a memory leak on kmem_cache_destroy() by adding
the missing free action to sysfs_slab_remove().
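The resulting pairing, as a sketch: the kmem_cache structure is allocated
from and freed back to its own cache, never mixed with the kmalloc heap:

	s = kmem_cache_alloc(kmem_cache, GFP_KERNEL);	/* not kmalloc() */
	kmem_cache_free(kmem_cache, s);			/* not kfree(s) */

Note that freeing a kmalloc()ed structure with kmem_cache_free(kmem_cache, s)
would also trip the wrong-cache check added in patch 01.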
Signed-off-by: Christoph Lameter <cl@linux.com>
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2012-08-01 13:02:18.897656578 -0500
+++ linux-2.6/mm/slub.c 2012-08-01 13:06:02.673597753 -0500
@@ -213,7 +213,7 @@
static inline void sysfs_slab_remove(struct kmem_cache *s)
{
kfree(s->name);
- kfree(s);
+ kmem_cache_free(kmem_cache, s);
}
#endif
@@ -3962,7 +3962,7 @@
if (!n)
return NULL;
- s = kmalloc(kmem_size, GFP_KERNEL);
+ s = kmem_cache_alloc(kmem_cache, GFP_KERNEL);
if (s) {
if (kmem_cache_open(s, n,
size, align, flags, ctor)) {
@@ -3979,7 +3979,7 @@
list_del(&s->list);
kmem_cache_close(s);
}
- kfree(s);
+ kmem_cache_free(kmem_cache, s);
}
kfree(n);
return NULL;
@@ -5217,7 +5217,7 @@
struct kmem_cache *s = to_slab(kobj);
kfree(s->name);
- kfree(s);
+ kmem_cache_free(kmem_cache, s);
}
static const struct sysfs_ops slab_sysfs_ops = {
@@ -5342,6 +5342,8 @@
kobject_uevent(&s->kobj, KOBJ_REMOVE);
kobject_del(&s->kobj);
kobject_put(&s->kobj);
+ kfree(s->name);
+ kmem_cache_free(kmem_cache, s);
}
/*
* Common [03/19] Rename oops label
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
2012-08-02 20:15 ` Common [01/19] slub: Add debugging to verify correct cache use on kmem_cache_free() Christoph Lameter
2012-08-02 20:15 ` Common [02/19] slub: Use kmem_cache for the kmem_cache structure Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 20:45 ` David Rientjes
2012-08-02 20:15 ` Common [04/19] Improve error handling in kmem_cache_create Christoph Lameter
` (16 subsequent siblings)
19 siblings, 1 reply; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: remove_oops --]
[-- Type: text/plain, Size: 875 bytes --]
The label is actually reached on successful exits as well, so change the name.
Signed-off-by: Christoph Lameter <cl@linux.com>
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c 2012-08-02 09:18:07.570384286 -0500
+++ linux-2.6/mm/slab_common.c 2012-08-02 09:18:53.311194644 -0500
@@ -89,7 +89,7 @@
name);
dump_stack();
s = NULL;
- goto oops;
+ goto out_locked;
}
}
@@ -99,7 +99,7 @@
s = __kmem_cache_create(name, size, align, flags, ctor);
#ifdef CONFIG_DEBUG_VM
-oops:
+out_locked:
#endif
mutex_unlock(&slab_mutex);
put_online_cpus();
* Common [04/19] Improve error handling in kmem_cache_create
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
` (2 preceding siblings ...)
2012-08-02 20:15 ` Common [03/19] Rename oops label Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 20:58 ` David Rientjes
2012-08-02 20:15 ` Common [05/19] Move list_add() to slab_common.c Christoph Lameter
` (15 subsequent siblings)
19 siblings, 1 reply; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: error_handling_in_kmem_cache_create --]
[-- Type: text/plain, Size: 2356 bytes --]
Instead of using s == NULL, use an error code. This allows much more
detailed diagnostics as to what went wrong. As we add more functionality
from the slab allocators to the common kmem_cache_create() function we will
also add more error conditions.

Print the error code during the panic as well as in a warning if the module
can handle failure. The API for kmem_cache_create() currently does not allow
returning an error code, so return NULL but log the cause of the problem
in the syslog.
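A sketch of the caller-visible behavior under the new scheme (the cache
name and the undersized object are made up purely to trigger the
CONFIG_DEBUG_VM integrity check):

	struct kmem_cache *c;

	/* size < sizeof(void *) now sets err = -EINVAL internally */
	c = kmem_cache_create("bogus", 2, 0, 0, NULL);

	/* Without SLAB_PANIC: c is NULL and the error code was logged
	   together with a stack dump. With SLAB_PANIC: the kernel
	   panics and the panic message includes the error code. */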
Signed-off-by: Christoph Lameter <cl@linux.com>
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c 2012-08-02 14:00:41.547674458 -0500
+++ linux-2.6/mm/slab_common.c 2012-08-02 14:21:07.061676525 -0500
@@ -51,13 +51,13 @@
struct kmem_cache *kmem_cache_create(const char *name, size_t size, size_t align,
unsigned long flags, void (*ctor)(void *))
{
- struct kmem_cache *s = NULL;
+ struct kmem_cache *s;
+ int err = 0;
#ifdef CONFIG_DEBUG_VM
if (!name || in_interrupt() || size < sizeof(void *) ||
size > KMALLOC_MAX_SIZE) {
- printk(KERN_ERR "kmem_cache_create(%s) integrity check"
- " failed\n", name);
+ err = -EINVAL;
goto out;
}
#endif
@@ -84,11 +84,7 @@
}
if (!strcmp(s->name, name)) {
- printk(KERN_ERR "kmem_cache_create(%s): Cache name"
- " already exists.\n",
- name);
- dump_stack();
- s = NULL;
+ err = -EEXIST;
goto out_locked;
}
}
@@ -97,6 +93,8 @@
#endif
s = __kmem_cache_create(name, size, align, flags, ctor);
+ if (!s)
+ err = -ENOSYS; /* Until __kmem_cache_create returns code */
#ifdef CONFIG_DEBUG_VM
out_locked:
@@ -107,8 +105,19 @@
#ifdef CONFIG_DEBUG_VM
out:
#endif
- if (!s && (flags & SLAB_PANIC))
- panic("kmem_cache_create: Failed to create slab '%s'\n", name);
+ if (err) {
+
+ if (flags & SLAB_PANIC)
+ panic("kmem_cache_create: Failed to create slab '%s'. Error %d\n",
+ name, err);
+ else {
+ printk(KERN_WARNING "kmem_cache_create(%s) failed with error %d\n",
+ name, err);
+ dump_stack();
+ }
+
+ return NULL;
+ }
return s;
}
* Common [05/19] Move list_add() to slab_common.c
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
` (3 preceding siblings ...)
2012-08-02 20:15 ` Common [04/19] Improve error handling in kmem_cache_create Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 21:18 ` David Rientjes
2012-08-02 20:15 ` Common [06/19] Extract a common function for kmem_cache_destroy Christoph Lameter
` (14 subsequent siblings)
19 siblings, 1 reply; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: move_list_add --]
[-- Type: text/plain, Size: 3287 bytes --]
Move the code that appends the new kmem_cache to the list of slab caches
into the common kmem_cache_create() function.
This is possible now since the acquisition of the mutex was moved into
kmem_cache_create().
Reviewed-by: Glauber Costa <glommer@parallels.com>
Reviewed-by: Joonsoo Kim <js1304@gmail.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slab.c | 7 +++++--
mm/slab_common.c | 7 +++++++
mm/slub.c | 2 --
3 files changed, 12 insertions(+), 4 deletions(-)
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c 2012-08-02 14:21:07.061676525 -0500
+++ linux-2.6/mm/slab_common.c 2012-08-02 14:21:12.797779926 -0500
@@ -96,6 +96,13 @@
if (!s)
err = -ENOSYS; /* Until __kmem_cache_create returns code */
+ /*
+ * Check if the slab has actually been created and if it was a
+ * real instantiation. Aliases do not belong on the list
+ */
+ if (s && s->refcount == 1)
+ list_add(&s->list, &slab_caches);
+
#ifdef CONFIG_DEBUG_VM
out_locked:
#endif
Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c 2012-08-02 14:20:20.276838502 -0500
+++ linux-2.6/mm/slab.c 2012-08-02 14:21:12.797779926 -0500
@@ -1687,6 +1687,7 @@
ARCH_KMALLOC_FLAGS|SLAB_PANIC,
NULL);
+ list_add(&sizes[INDEX_AC].cs_cachep->list, &slab_caches);
if (INDEX_AC != INDEX_L3) {
sizes[INDEX_L3].cs_cachep =
__kmem_cache_create(names[INDEX_L3].name,
@@ -1694,6 +1695,7 @@
ARCH_KMALLOC_MINALIGN,
ARCH_KMALLOC_FLAGS|SLAB_PANIC,
NULL);
+ list_add(&sizes[INDEX_L3].cs_cachep->list, &slab_caches);
}
slab_early_init = 0;
@@ -1712,6 +1714,7 @@
ARCH_KMALLOC_MINALIGN,
ARCH_KMALLOC_FLAGS|SLAB_PANIC,
NULL);
+ list_add(&sizes->cs_cachep->list, &slab_caches);
}
#ifdef CONFIG_ZONE_DMA
sizes->cs_dmacachep = __kmem_cache_create(
@@ -1721,6 +1724,7 @@
ARCH_KMALLOC_FLAGS|SLAB_CACHE_DMA|
SLAB_PANIC,
NULL);
+ list_add(&sizes->cs_dmacachep->list, &slab_caches);
#endif
sizes++;
names++;
@@ -2590,6 +2594,7 @@
}
cachep->ctor = ctor;
cachep->name = name;
+ cachep->refcount = 1;
if (setup_cpu_cache(cachep, gfp)) {
__kmem_cache_destroy(cachep);
@@ -2606,8 +2611,6 @@
slab_set_debugobj_lock_classes(cachep);
}
- /* cache setup completed, link it into the list */
- list_add(&cachep->list, &slab_caches);
return cachep;
}
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2012-08-02 14:20:20.296838861 -0500
+++ linux-2.6/mm/slub.c 2012-08-02 14:21:12.801780016 -0500
@@ -3968,7 +3968,6 @@
size, align, flags, ctor)) {
int r;
- list_add(&s->list, &slab_caches);
mutex_unlock(&slab_mutex);
r = sysfs_slab_add(s);
mutex_lock(&slab_mutex);
@@ -3976,7 +3975,6 @@
if (!r)
return s;
- list_del(&s->list);
kmem_cache_close(s);
}
kmem_cache_free(kmem_cache, s);
* Common [06/19] Extract a common function for kmem_cache_destroy
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
` (4 preceding siblings ...)
2012-08-02 20:15 ` Common [05/19] Move list_add() to slab_common.c Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 20:15 ` Common [07/19] Always use the name "kmem_cache" for the slab cache with the kmem_cache structure Christoph Lameter
` (13 subsequent siblings)
19 siblings, 0 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: kmem_cache_destroy --]
[-- Type: text/plain, Size: 7170 bytes --]
kmem_cache_destroy() does basically the same thing in all allocators.
Extract the common code, which is easy since we already have common mutex handling.
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slab.c | 55 +++----------------------------------------------------
mm/slab.h | 3 +++
mm/slab_common.c | 25 +++++++++++++++++++++++++
mm/slob.c | 11 +++++++----
mm/slub.c | 35 +++++++++++------------------------
5 files changed, 49 insertions(+), 80 deletions(-)
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c 2012-08-02 14:21:12.797779926 -0500
+++ linux-2.6/mm/slab_common.c 2012-08-02 14:21:17.301860675 -0500
@@ -130,6 +130,31 @@
}
EXPORT_SYMBOL(kmem_cache_create);
+void kmem_cache_destroy(struct kmem_cache *s)
+{
+ get_online_cpus();
+ mutex_lock(&slab_mutex);
+ s->refcount--;
+ if (!s->refcount) {
+ list_del(&s->list);
+
+ if (!__kmem_cache_shutdown(s)) {
+ if (s->flags & SLAB_DESTROY_BY_RCU)
+ rcu_barrier();
+
+ __kmem_cache_destroy(s);
+ } else {
+ list_add(&s->list, &slab_caches);
+ printk(KERN_ERR "kmem_cache_destroy %s: Slab cache still has objects\n",
+ s->name);
+ dump_stack();
+ }
+ }
+ mutex_unlock(&slab_mutex);
+ put_online_cpus();
+}
+EXPORT_SYMBOL(kmem_cache_destroy);
+
int slab_is_available(void)
{
return slab_state >= UP;
Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c 2012-08-02 14:21:12.797779926 -0500
+++ linux-2.6/mm/slab.c 2012-08-02 14:21:17.301860675 -0500
@@ -810,16 +810,6 @@
*left_over = slab_size - nr_objs*buffer_size - mgmt_size;
}
-#define slab_error(cachep, msg) __slab_error(__func__, cachep, msg)
-
-static void __slab_error(const char *function, struct kmem_cache *cachep,
- char *msg)
-{
- printk(KERN_ERR "slab error in %s(): cache `%s': %s\n",
- function, cachep->name, msg);
- dump_stack();
-}
-
/*
* By default on NUMA we use alien caches to stage the freeing of
* objects allocated from other nodes. This causes massive memory
@@ -2213,7 +2203,7 @@
}
}
-static void __kmem_cache_destroy(struct kmem_cache *cachep)
+void __kmem_cache_destroy(struct kmem_cache *cachep)
{
int i;
struct kmem_list3 *l3;
@@ -2770,49 +2760,10 @@
}
EXPORT_SYMBOL(kmem_cache_shrink);
-/**
- * kmem_cache_destroy - delete a cache
- * @cachep: the cache to destroy
- *
- * Remove a &struct kmem_cache object from the slab cache.
- *
- * It is expected this function will be called by a module when it is
- * unloaded. This will remove the cache completely, and avoid a duplicate
- * cache being allocated each time a module is loaded and unloaded, if the
- * module doesn't have persistent in-kernel storage across loads and unloads.
- *
- * The cache must be empty before calling this function.
- *
- * The caller must guarantee that no one will allocate memory from the cache
- * during the kmem_cache_destroy().
- */
-void kmem_cache_destroy(struct kmem_cache *cachep)
+int __kmem_cache_shutdown(struct kmem_cache *cachep)
{
- BUG_ON(!cachep || in_interrupt());
-
- /* Find the cache in the chain of caches. */
- get_online_cpus();
- mutex_lock(&slab_mutex);
- /*
- * the chain is never empty, cache_cache is never destroyed
- */
- list_del(&cachep->list);
- if (__cache_shrink(cachep)) {
- slab_error(cachep, "Can't free all objects");
- list_add(&cachep->list, &slab_caches);
- mutex_unlock(&slab_mutex);
- put_online_cpus();
- return;
- }
-
- if (unlikely(cachep->flags & SLAB_DESTROY_BY_RCU))
- rcu_barrier();
-
- __kmem_cache_destroy(cachep);
- mutex_unlock(&slab_mutex);
- put_online_cpus();
+ return __cache_shrink(cachep);
}
-EXPORT_SYMBOL(kmem_cache_destroy);
/*
* Get the memory for a slab management obj.
Index: linux-2.6/mm/slab.h
===================================================================
--- linux-2.6.orig/mm/slab.h 2012-08-02 14:20:19.400822800 -0500
+++ linux-2.6/mm/slab.h 2012-08-02 14:21:17.301860675 -0500
@@ -30,4 +30,7 @@
struct kmem_cache *__kmem_cache_create(const char *name, size_t size,
size_t align, unsigned long flags, void (*ctor)(void *));
+int __kmem_cache_shutdown(struct kmem_cache *);
+void __kmem_cache_destroy(struct kmem_cache *);
+
#endif
Index: linux-2.6/mm/slob.c
===================================================================
--- linux-2.6.orig/mm/slob.c 2012-08-02 14:20:19.416823086 -0500
+++ linux-2.6/mm/slob.c 2012-08-02 14:21:17.301860675 -0500
@@ -538,14 +538,11 @@
return c;
}
-void kmem_cache_destroy(struct kmem_cache *c)
+void __kmem_cache_destroy(struct kmem_cache *c)
{
kmemleak_free(c);
- if (c->flags & SLAB_DESTROY_BY_RCU)
- rcu_barrier();
slob_free(c, sizeof(struct kmem_cache));
}
-EXPORT_SYMBOL(kmem_cache_destroy);
void *kmem_cache_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
{
@@ -613,6 +610,12 @@
}
EXPORT_SYMBOL(kmem_cache_size);
+int __kmem_cache_shutdown(struct kmem_cache *c)
+{
+ /* No way to check for remaining objects */
+ return 0;
+}
+
int kmem_cache_shrink(struct kmem_cache *d)
{
return 0;
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2012-08-02 14:21:12.801780016 -0500
+++ linux-2.6/mm/slub.c 2012-08-02 14:21:17.305860748 -0500
@@ -624,7 +624,7 @@
print_trailer(s, page, object);
}
-static void slab_err(struct kmem_cache *s, struct page *page, char *fmt, ...)
+static void slab_err(struct kmem_cache *s, struct page *page, const char *fmt, ...)
{
va_list args;
char buf[100];
@@ -3139,7 +3139,7 @@
sizeof(long), GFP_ATOMIC);
if (!map)
return;
- slab_err(s, page, "%s", text);
+ slab_err(s, page, text, s->name);
slab_lock(page);
get_map(s, page, map);
@@ -3171,7 +3171,7 @@
discard_slab(s, page);
} else {
list_slab_objects(s, page,
- "Objects remaining on kmem_cache_close()");
+ "Objects remaining in %s on kmem_cache_close()");
}
}
}
@@ -3197,29 +3197,15 @@
return 0;
}
-/*
- * Close a cache and release the kmem_cache structure
- * (must be used for caches created using kmem_cache_create)
- */
-void kmem_cache_destroy(struct kmem_cache *s)
+int __kmem_cache_shutdown(struct kmem_cache *s)
{
- mutex_lock(&slab_mutex);
- s->refcount--;
- if (!s->refcount) {
- list_del(&s->list);
- mutex_unlock(&slab_mutex);
- if (kmem_cache_close(s)) {
- printk(KERN_ERR "SLUB %s: %s called for cache that "
- "still has objects.\n", s->name, __func__);
- dump_stack();
- }
- if (s->flags & SLAB_DESTROY_BY_RCU)
- rcu_barrier();
- sysfs_slab_remove(s);
- } else
- mutex_unlock(&slab_mutex);
+ return kmem_cache_close(s);
+}
+
+void __kmem_cache_destroy(struct kmem_cache *s)
+{
+ sysfs_slab_remove(s);
}
-EXPORT_SYMBOL(kmem_cache_destroy);
/********************************************************************
* Kmalloc subsystem
* Common [07/19] Always use the name "kmem_cache" for the slab cache with the kmem_cache structure.
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
` (5 preceding siblings ...)
2012-08-02 20:15 ` Common [06/19] Extract a common function for kmem_cache_destroy Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 20:15 ` Common [08/19] Move freeing of kmem_cache structure to common code Christoph Lameter
` (12 subsequent siblings)
19 siblings, 0 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: common_kmem_cache_name --]
[-- Type: text/plain, Size: 9153 bytes --]
Make all allocators use the slab name "kmem_cache" for the cache that holds the kmem_cache structures.
Reviewed-by: Glauber Costa <glommer@parallels.com>
Reviewed-by: Joonsoo Kim <js1304@gmail.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slab.c | 72 ++++++++++++++++++++++++++++---------------------------
mm/slab.h | 6 ++++
mm/slab_common.c | 1
mm/slob.c | 9 ++++++
mm/slub.c | 2 -
5 files changed, 52 insertions(+), 38 deletions(-)
Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c 2012-08-02 14:11:00.322806409 -0500
+++ linux-2.6/mm/slab.c 2012-08-02 14:11:04.186875599 -0500
@@ -585,9 +585,9 @@
{ {0, BOOT_CPUCACHE_ENTRIES, 1, 0} };
/* internal cache of cache description objs */
-static struct kmem_list3 *cache_cache_nodelists[MAX_NUMNODES];
-static struct kmem_cache cache_cache = {
- .nodelists = cache_cache_nodelists,
+static struct kmem_list3 *kmem_cache_nodelists[MAX_NUMNODES];
+static struct kmem_cache kmem_cache_boot = {
+ .nodelists = kmem_cache_nodelists,
.batchcount = 1,
.limit = BOOT_CPUCACHE_ENTRIES,
.shared = 1,
@@ -1591,15 +1591,17 @@
int order;
int node;
+ kmem_cache = &kmem_cache_boot;
+
if (num_possible_nodes() == 1)
use_alien_caches = 0;
for (i = 0; i < NUM_INIT_LISTS; i++) {
kmem_list3_init(&initkmem_list3[i]);
if (i < MAX_NUMNODES)
- cache_cache.nodelists[i] = NULL;
+ kmem_cache->nodelists[i] = NULL;
}
- set_up_list3s(&cache_cache, CACHE_CACHE);
+ set_up_list3s(kmem_cache, CACHE_CACHE);
/*
* Fragmentation resistance on low memory - only use bigger
@@ -1611,9 +1613,9 @@
/* Bootstrap is tricky, because several objects are allocated
* from caches that do not exist yet:
- * 1) initialize the cache_cache cache: it contains the struct
- * kmem_cache structures of all caches, except cache_cache itself:
- * cache_cache is statically allocated.
+ * 1) initialize the kmem_cache cache: it contains the struct
+ * kmem_cache structures of all caches, except kmem_cache itself:
+ * kmem_cache is statically allocated.
* Initially an __init data area is used for the head array and the
* kmem_list3 structures, it's replaced with a kmalloc allocated
* array at the end of the bootstrap.
@@ -1622,43 +1624,43 @@
* An __init data area is used for the head array.
* 3) Create the remaining kmalloc caches, with minimally sized
* head arrays.
- * 4) Replace the __init data head arrays for cache_cache and the first
+ * 4) Replace the __init data head arrays for kmem_cache and the first
* kmalloc cache with kmalloc allocated arrays.
- * 5) Replace the __init data for kmem_list3 for cache_cache and
+ * 5) Replace the __init data for kmem_list3 for kmem_cache and
* the other cache's with kmalloc allocated memory.
* 6) Resize the head arrays of the kmalloc caches to their final sizes.
*/
node = numa_mem_id();
- /* 1) create the cache_cache */
+ /* 1) create the kmem_cache */
INIT_LIST_HEAD(&slab_caches);
- list_add(&cache_cache.list, &slab_caches);
- cache_cache.colour_off = cache_line_size();
- cache_cache.array[smp_processor_id()] = &initarray_cache.cache;
- cache_cache.nodelists[node] = &initkmem_list3[CACHE_CACHE + node];
+ list_add(&kmem_cache->list, &slab_caches);
+ kmem_cache->colour_off = cache_line_size();
+ kmem_cache->array[smp_processor_id()] = &initarray_cache.cache;
+ kmem_cache->nodelists[node] = &initkmem_list3[CACHE_CACHE + node];
/*
* struct kmem_cache size depends on nr_node_ids & nr_cpu_ids
*/
- cache_cache.size = offsetof(struct kmem_cache, array[nr_cpu_ids]) +
+ kmem_cache->size = offsetof(struct kmem_cache, array[nr_cpu_ids]) +
nr_node_ids * sizeof(struct kmem_list3 *);
- cache_cache.object_size = cache_cache.size;
- cache_cache.size = ALIGN(cache_cache.size,
+ kmem_cache->object_size = kmem_cache->size;
+ kmem_cache->size = ALIGN(kmem_cache->object_size,
cache_line_size());
- cache_cache.reciprocal_buffer_size =
- reciprocal_value(cache_cache.size);
+ kmem_cache->reciprocal_buffer_size =
+ reciprocal_value(kmem_cache->size);
for (order = 0; order < MAX_ORDER; order++) {
- cache_estimate(order, cache_cache.size,
- cache_line_size(), 0, &left_over, &cache_cache.num);
- if (cache_cache.num)
+ cache_estimate(order, kmem_cache->size,
+ cache_line_size(), 0, &left_over, &kmem_cache->num);
+ if (kmem_cache->num)
break;
}
- BUG_ON(!cache_cache.num);
- cache_cache.gfporder = order;
- cache_cache.colour = left_over / cache_cache.colour_off;
- cache_cache.slab_size = ALIGN(cache_cache.num * sizeof(kmem_bufctl_t) +
+ BUG_ON(!kmem_cache->num);
+ kmem_cache->gfporder = order;
+ kmem_cache->colour = left_over / kmem_cache->colour_off;
+ kmem_cache->slab_size = ALIGN(kmem_cache->num * sizeof(kmem_bufctl_t) +
sizeof(struct slab), cache_line_size());
/* 2+3) create the kmalloc caches */
@@ -1725,15 +1727,15 @@
ptr = kmalloc(sizeof(struct arraycache_init), GFP_NOWAIT);
- BUG_ON(cpu_cache_get(&cache_cache) != &initarray_cache.cache);
- memcpy(ptr, cpu_cache_get(&cache_cache),
+ BUG_ON(cpu_cache_get(kmem_cache) != &initarray_cache.cache);
+ memcpy(ptr, cpu_cache_get(kmem_cache),
sizeof(struct arraycache_init));
/*
* Do not assume that spinlocks can be initialized via memcpy:
*/
spin_lock_init(&ptr->lock);
- cache_cache.array[smp_processor_id()] = ptr;
+ kmem_cache->array[smp_processor_id()] = ptr;
ptr = kmalloc(sizeof(struct arraycache_init), GFP_NOWAIT);
@@ -1754,7 +1756,7 @@
int nid;
for_each_online_node(nid) {
- init_list(&cache_cache, &initkmem_list3[CACHE_CACHE + nid], nid);
+ init_list(kmem_cache, &initkmem_list3[CACHE_CACHE + nid], nid);
init_list(malloc_sizes[INDEX_AC].cs_cachep,
&initkmem_list3[SIZE_AC + nid], nid);
@@ -2220,7 +2222,7 @@
kfree(l3);
}
}
- kmem_cache_free(&cache_cache, cachep);
+ kmem_cache_free(kmem_cache, cachep);
}
@@ -2470,7 +2472,7 @@
gfp = GFP_NOWAIT;
/* Get cache's description obj. */
- cachep = kmem_cache_zalloc(&cache_cache, gfp);
+ cachep = kmem_cache_zalloc(kmem_cache, gfp);
if (!cachep)
return NULL;
@@ -2528,7 +2530,7 @@
if (!cachep->num) {
printk(KERN_ERR
"kmem_cache_create: couldn't create cache %s.\n", name);
- kmem_cache_free(&cache_cache, cachep);
+ kmem_cache_free(kmem_cache, cachep);
return NULL;
}
slab_size = ALIGN(cachep->num * sizeof(kmem_bufctl_t)
@@ -3296,7 +3298,7 @@
static bool slab_should_failslab(struct kmem_cache *cachep, gfp_t flags)
{
- if (cachep == &cache_cache)
+ if (cachep == kmem_cache)
return false;
return should_failslab(cachep->object_size, flags, cachep->flags);
Index: linux-2.6/mm/slab.h
===================================================================
--- linux-2.6.orig/mm/slab.h 2012-08-02 14:11:00.322806409 -0500
+++ linux-2.6/mm/slab.h 2012-08-02 14:11:04.186875599 -0500
@@ -25,8 +25,14 @@
/* The slab cache mutex protects the management structures during changes */
extern struct mutex slab_mutex;
+
+/* The list of all slab caches on the system */
extern struct list_head slab_caches;
+/* The slab cache that manages slab cache information */
+extern struct kmem_cache *kmem_cache;
+
+/* Functions provided by the slab allocators */
struct kmem_cache *__kmem_cache_create(const char *name, size_t size,
size_t align, unsigned long flags, void (*ctor)(void *));
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c 2012-08-02 14:11:00.322806409 -0500
+++ linux-2.6/mm/slab_common.c 2012-08-02 14:11:04.186875599 -0500
@@ -22,6 +22,7 @@
enum slab_state slab_state;
LIST_HEAD(slab_caches);
DEFINE_MUTEX(slab_mutex);
+struct kmem_cache *kmem_cache;
/*
* kmem_cache_create - Create a cache.
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2012-08-02 14:11:00.322806409 -0500
+++ linux-2.6/mm/slub.c 2012-08-02 14:11:04.186875599 -0500
@@ -3214,8 +3214,6 @@
struct kmem_cache *kmalloc_caches[SLUB_PAGE_SHIFT];
EXPORT_SYMBOL(kmalloc_caches);
-static struct kmem_cache *kmem_cache;
-
#ifdef CONFIG_ZONE_DMA
static struct kmem_cache *kmalloc_dma_caches[SLUB_PAGE_SHIFT];
#endif
Index: linux-2.6/mm/slob.c
===================================================================
--- linux-2.6.orig/mm/slob.c 2012-08-02 14:11:00.322806409 -0500
+++ linux-2.6/mm/slob.c 2012-08-02 14:11:04.186875599 -0500
@@ -622,8 +622,16 @@
}
EXPORT_SYMBOL(kmem_cache_shrink);
+struct kmem_cache kmem_cache_boot = {
+ .name = "kmem_cache",
+ .size = sizeof(struct kmem_cache),
+ .flags = SLAB_PANIC,
+ .align = ARCH_KMALLOC_MINALIGN,
+};
+
void __init kmem_cache_init(void)
{
+ kmem_cache = &kmem_cache_boot;
slab_state = UP;
}
* Common [08/19] Move freeing of kmem_cache structure to common code
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
` (6 preceding siblings ...)
2012-08-02 20:15 ` Common [07/19] Always use the name "kmem_cache" for the slab cache with the kmem_cache structure Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 20:15 ` Common [09/19] Get rid of __kmem_cache_destroy Christoph Lameter
` (11 subsequent siblings)
19 siblings, 0 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: common_kmem_cache_destroy --]
[-- Type: text/plain, Size: 2322 bytes --]
The freeing action is basically the same in all slab allocators.
Move it to the common kmem_cache_destroy() function.
Reviewed-by: Joonsoo Kim <js1304@gmail.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slab.c | 1 -
mm/slab_common.c | 1 +
mm/slob.c | 2 --
mm/slub.c | 1 -
4 files changed, 1 insertion(+), 4 deletions(-)
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c 2012-08-02 14:20:05.688577022 -0500
+++ linux-2.6/mm/slab_common.c 2012-08-02 14:20:06.964599888 -0500
@@ -144,6 +144,7 @@
rcu_barrier();
__kmem_cache_destroy(s);
+ kmem_cache_free(kmem_cache, s);
} else {
list_add(&s->list, &slab_caches);
printk(KERN_ERR "kmem_cache_destroy %s: Slab cache still has objects\n",
Index: linux-2.6/mm/slob.c
===================================================================
--- linux-2.6.orig/mm/slob.c 2012-08-02 14:20:05.692577093 -0500
+++ linux-2.6/mm/slob.c 2012-08-02 14:20:06.968599959 -0500
@@ -540,8 +540,6 @@
void __kmem_cache_destroy(struct kmem_cache *c)
{
- kmemleak_free(c);
- slob_free(c, sizeof(struct kmem_cache));
}
void *kmem_cache_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c 2012-08-02 14:20:05.688577022 -0500
+++ linux-2.6/mm/slab.c 2012-08-02 14:20:06.968599959 -0500
@@ -2222,7 +2222,6 @@
kfree(l3);
}
}
- kmem_cache_free(kmem_cache, cachep);
}
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2012-08-02 14:20:05.692577093 -0500
+++ linux-2.6/mm/slub.c 2012-08-02 14:20:06.968599959 -0500
@@ -213,7 +213,6 @@
static inline void sysfs_slab_remove(struct kmem_cache *s)
{
kfree(s->name);
- kmem_cache_free(kmem_cache, s);
}
#endif
@@ -5325,7 +5324,6 @@
kobject_del(&s->kobj);
kobject_put(&s->kobj);
kfree(s->name);
- kmem_cache_free(kmem_cache, s);
}
/*
* Common [09/19] Get rid of __kmem_cache_destroy
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
` (7 preceding siblings ...)
2012-08-02 20:15 ` Common [08/19] Move freeing of kmem_cache structure to common code Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 20:15 ` Common [10/19] Move duping of slab name to slab_common.c Christoph Lameter
` (10 subsequent siblings)
19 siblings, 0 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: no_slab_specific_kmem_cache_destroy --]
[-- Type: text/plain, Size: 4072 bytes --]
What is done there can be done in __kmem_cache_shutdown().
This affects RCU handling somewhat: on RCU free, no slab allocator
refers to management structures other than the kmem_cache structure
itself. Therefore these other structures can be freed before the
RCU-deferred free to the page allocator occurs.
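The resulting teardown order in the common kmem_cache_destroy(), condensed
from the previous patches as a sketch:

	if (!__kmem_cache_shutdown(s)) {	/* frees per-cpu/per-node
						   management structures */
		if (s->flags & SLAB_DESTROY_BY_RCU)
			rcu_barrier();		/* wait for deferred frees
						   to the page allocator */
		kmem_cache_free(kmem_cache, s);	/* only this must wait */
	}

Only the kmem_cache structure is referenced by the RCU-deferred free path,
so the other management structures may safely be released first.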
Reviewed-by: Joonsoo Kim <js1304@gmail.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slab.c | 43 +++++++++++++++++++++----------------------
mm/slab.h | 1 -
mm/slab_common.c | 1 -
mm/slob.c | 4 ----
mm/slub.c | 10 +++++-----
5 files changed, 26 insertions(+), 33 deletions(-)
Index: linux-2.6/mm/slob.c
===================================================================
--- linux-2.6.orig/mm/slob.c 2012-08-02 14:20:06.968599959 -0500
+++ linux-2.6/mm/slob.c 2012-08-02 14:20:14.452734107 -0500
@@ -538,10 +538,6 @@
return c;
}
-void __kmem_cache_destroy(struct kmem_cache *c)
-{
-}
-
void *kmem_cache_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
{
void *b;
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2012-08-02 14:20:06.968599959 -0500
+++ linux-2.6/mm/slub.c 2012-08-02 14:20:14.456734180 -0500
@@ -3198,12 +3198,12 @@
int __kmem_cache_shutdown(struct kmem_cache *s)
{
- return kmem_cache_close(s);
-}
+ int rc = kmem_cache_close(s);
-void __kmem_cache_destroy(struct kmem_cache *s)
-{
- sysfs_slab_remove(s);
+ if (!rc)
+ sysfs_slab_remove(s);
+
+ return rc;
}
/********************************************************************
Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c 2012-08-02 14:20:06.968599959 -0500
+++ linux-2.6/mm/slab.c 2012-08-02 14:20:14.456734180 -0500
@@ -2205,26 +2205,6 @@
}
}
-void __kmem_cache_destroy(struct kmem_cache *cachep)
-{
- int i;
- struct kmem_list3 *l3;
-
- for_each_online_cpu(i)
- kfree(cachep->array[i]);
-
- /* NUMA: free the list3 structures */
- for_each_online_node(i) {
- l3 = cachep->nodelists[i];
- if (l3) {
- kfree(l3->shared);
- free_alien_cache(l3->alien);
- kfree(l3);
- }
- }
-}
-
-
/**
* calculate_slab_order - calculate size (page order) of slabs
* @cachep: pointer to the cache that is being created
@@ -2588,7 +2568,7 @@
cachep->refcount = 1;
if (setup_cpu_cache(cachep, gfp)) {
- __kmem_cache_destroy(cachep);
+ __kmem_cache_shutdown(cachep);
return NULL;
}
@@ -2763,7 +2743,26 @@
int __kmem_cache_shutdown(struct kmem_cache *cachep)
{
- return __cache_shrink(cachep);
+ int i;
+ struct kmem_list3 *l3;
+ int rc = __cache_shrink(cachep);
+
+ if (rc)
+ return rc;
+
+ for_each_online_cpu(i)
+ kfree(cachep->array[i]);
+
+ /* NUMA: free the list3 structures */
+ for_each_online_node(i) {
+ l3 = cachep->nodelists[i];
+ if (l3) {
+ kfree(l3->shared);
+ free_alien_cache(l3->alien);
+ kfree(l3);
+ }
+ }
+ return 0;
}
/*
Index: linux-2.6/mm/slab.h
===================================================================
--- linux-2.6.orig/mm/slab.h 2012-08-02 14:20:05.688577022 -0500
+++ linux-2.6/mm/slab.h 2012-08-02 14:20:14.456734180 -0500
@@ -37,6 +37,5 @@
size_t align, unsigned long flags, void (*ctor)(void *));
int __kmem_cache_shutdown(struct kmem_cache *);
-void __kmem_cache_destroy(struct kmem_cache *);
#endif
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c 2012-08-02 14:20:06.964599888 -0500
+++ linux-2.6/mm/slab_common.c 2012-08-02 14:20:14.456734180 -0500
@@ -143,7 +143,6 @@
if (s->flags & SLAB_DESTROY_BY_RCU)
rcu_barrier();
- __kmem_cache_destroy(s);
kmem_cache_free(kmem_cache, s);
} else {
list_add(&s->list, &slab_caches);
* Common [10/19] Move duping of slab name to slab_common.c
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
` (8 preceding siblings ...)
2012-08-02 20:15 ` Common [09/19] Get rid of __kmem_cache_destroy Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 20:15 ` Common [11/19] Do slab aliasing call from common code Christoph Lameter
` (9 subsequent siblings)
19 siblings, 0 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: dup_name_in_common --]
[-- Type: text/plain, Size: 3830 bytes --]
Duplicating the slab name currently has to be done by each slab allocator.
Moving this code to slab_common.c avoids duplicate implementations.

With this patch we have common string handling for all slab allocators:
strings passed to kmem_cache_create() are copied internally, so subsystems
can create slab caches from temporary strings.

Slabs allocated in the early stages of bootstrap will never be freed (and
they can never be freed, since they are essential to slab allocator
operation). During bootstrap we therefore do not have to worry about
duping names.
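For example, a hypothetical caller may now do:

	char name[32];

	snprintf(name, sizeof(name), "foo_cache_%d", id);
	c = kmem_cache_create(name, size, 0, 0, NULL);
	/* name may be reused or go out of scope: kmem_cache_create()
	   kept its own kstrdup()ed copy and frees it on destroy */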
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slab_common.c | 24 +++++++++++++++++-------
mm/slub.c | 5 -----
2 files changed, 17 insertions(+), 12 deletions(-)
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c 2012-08-02 14:21:24.000000000 -0500
+++ linux-2.6/mm/slab_common.c 2012-08-02 14:22:59.087685489 -0500
@@ -54,6 +54,7 @@
{
struct kmem_cache *s;
int err = 0;
+ char *n;
#ifdef CONFIG_DEBUG_VM
if (!name || in_interrupt() || size < sizeof(void *) ||
@@ -93,16 +94,26 @@
WARN_ON(strchr(name, ' ')); /* It confuses parsers */
#endif
- s = __kmem_cache_create(name, size, align, flags, ctor);
- if (!s)
- err = -ENOSYS; /* Until __kmem_cache_create returns code */
+ n = kstrdup(name, GFP_KERNEL);
+ if (!n) {
+ err = -ENOMEM;
+ goto out_locked;
+ }
+
+ s = __kmem_cache_create(n, size, align, flags, ctor);
- /*
- * Check if the slab has actually been created and if it was a
- * real instantiation. Aliases do not belong on the list
- */
- if (s && s->refcount == 1)
- list_add(&s->list, &slab_caches);
+ if (s) {
+ /*
+ * Check if the slab has actually been created and if it was a
+ * real instantiation. Aliases do not belong on the list
+ */
+ if (s->refcount == 1)
+ list_add(&s->list, &slab_caches);
+
+ } else {
+ kfree(n);
+ err = -ENOSYS; /* Until __kmem_cache_create returns code */
+ }
#ifdef CONFIG_DEBUG_VM
out_locked:
@@ -143,6 +154,7 @@
if (s->flags & SLAB_DESTROY_BY_RCU)
rcu_barrier();
+ kfree(s->name);
kmem_cache_free(kmem_cache, s);
} else {
list_add(&s->list, &slab_caches);
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2012-08-02 14:21:24.837995817 -0500
+++ linux-2.6/mm/slub.c 2012-08-02 14:21:30.678100549 -0500
@@ -210,10 +210,7 @@
static inline int sysfs_slab_add(struct kmem_cache *s) { return 0; }
static inline int sysfs_slab_alias(struct kmem_cache *s, const char *p)
{ return 0; }
-static inline void sysfs_slab_remove(struct kmem_cache *s)
-{
- kfree(s->name);
-}
+static inline void sysfs_slab_remove(struct kmem_cache *s) { }
#endif
@@ -3922,7 +3919,6 @@
size_t align, unsigned long flags, void (*ctor)(void *))
{
struct kmem_cache *s;
- char *n;
s = find_mergeable(size, align, flags, name, ctor);
if (s) {
@@ -3941,13 +3937,9 @@
return s;
}
- n = kstrdup(name, GFP_KERNEL);
- if (!n)
- return NULL;
-
s = kmem_cache_alloc(kmem_cache, GFP_KERNEL);
if (s) {
- if (kmem_cache_open(s, n,
+ if (kmem_cache_open(s, name,
size, align, flags, ctor)) {
int r;
@@ -3962,7 +3954,6 @@
}
kmem_cache_free(kmem_cache, s);
}
- kfree(n);
return NULL;
}
@@ -5197,7 +5188,6 @@
{
struct kmem_cache *s = to_slab(kobj);
- kfree(s->name);
kmem_cache_free(kmem_cache, s);
}
@@ -5323,7 +5313,6 @@
kobject_uevent(&s->kobj, KOBJ_REMOVE);
kobject_del(&s->kobj);
kobject_put(&s->kobj);
- kfree(s->name);
}
/*
* Common [11/19] Do slab aliasing call from common code
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
` (9 preceding siblings ...)
2012-08-02 20:15 ` Common [10/19] Move duping of slab name to slab_common.c Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 20:15 ` Common [12/19] Move sysfs_slab_add to common Christoph Lameter
` (8 subsequent siblings)
19 siblings, 0 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: slab_alias_common --]
[-- Type: text/plain, Size: 3418 bytes --]
The slab aliasing logic causes some strange contortions in
slub, so add a call to deal with aliases to slab_common.c
and disable it for the other slab allocators by providing
stubs that fail to create aliases.

Full general support for aliases will require additional
cleanup passes and more standardization of fields in
kmem_cache.
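A sketch of what an alias means for a SLUB caller (the cache names are
hypothetical; merging requires compatible size/align/flags and no
constructor):

	struct kmem_cache *a = kmem_cache_create("cache_a", 64, 0, 0, NULL);
	struct kmem_cache *b = kmem_cache_create("cache_b", 64, 0, 0, NULL);

	/* Under SLUB, b may simply be a with its refcount bumped, and
	   "cache_b" appears as a sysfs alias of the merged cache. SLAB
	   and SLOB stub out __kmem_cache_alias() and always create a
	   separate cache. */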
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slab.h | 10 ++++++++++
mm/slab_common.c | 16 +++++++---------
mm/slub.c | 18 ++++++++++++------
3 files changed, 29 insertions(+), 15 deletions(-)
Index: linux-2.6/mm/slab.h
===================================================================
--- linux-2.6.orig/mm/slab.h 2012-08-02 14:21:24.841995858 -0500
+++ linux-2.6/mm/slab.h 2012-08-02 14:23:08.071846583 -0500
@@ -36,6 +36,16 @@
struct kmem_cache *__kmem_cache_create(const char *name, size_t size,
size_t align, unsigned long flags, void (*ctor)(void *));
+#ifdef CONFIG_SLUB
+struct kmem_cache *__kmem_cache_alias(const char *name, size_t size,
+ size_t align, unsigned long flags, void (*ctor)(void *));
+#else
+static inline struct kmem_cache *__kmem_cache_alias(const char *name, size_t size,
+ size_t align, unsigned long flags, void (*ctor)(void *))
+{ return NULL; }
+#endif
+
+
int __kmem_cache_shutdown(struct kmem_cache *);
#endif
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c 2012-08-02 14:22:59.087685489 -0500
+++ linux-2.6/mm/slab_common.c 2012-08-02 14:23:08.071846583 -0500
@@ -94,6 +94,10 @@
WARN_ON(strchr(name, ' ')); /* It confuses parsers */
#endif
+ s = __kmem_cache_alias(name, size, align, flags, ctor);
+ if (s)
+ goto out_locked;
+
n = kstrdup(name, GFP_KERNEL);
if (!n) {
err = -ENOMEM;
@@ -115,9 +119,7 @@
err = -ENOSYS; /* Until __kmem_cache_create returns code */
}
-#ifdef CONFIG_DEBUG_VM
out_locked:
-#endif
mutex_unlock(&slab_mutex);
put_online_cpus();
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2012-08-02 14:21:30.678100549 -0500
+++ linux-2.6/mm/slub.c 2012-08-02 14:23:08.075846653 -0500
@@ -3701,7 +3701,7 @@
slub_max_order = 0;
kmem_size = offsetof(struct kmem_cache, node) +
- nr_node_ids * sizeof(struct kmem_cache_node *);
+ nr_node_ids * sizeof(struct kmem_cache_node *);
/* Allocate two kmem_caches from the page allocator */
kmalloc_size = ALIGN(kmem_size, cache_line_size());
@@ -3915,7 +3915,7 @@
return NULL;
}
-struct kmem_cache *__kmem_cache_create(const char *name, size_t size,
+struct kmem_cache *__kmem_cache_alias(const char *name, size_t size,
size_t align, unsigned long flags, void (*ctor)(void *))
{
struct kmem_cache *s;
@@ -3932,11 +3932,18 @@
if (sysfs_slab_alias(s, name)) {
s->refcount--;
- return NULL;
+ s = NULL;
}
- return s;
}
+ return s;
+}
+
+struct kmem_cache *__kmem_cache_create(const char *name, size_t size,
+ size_t align, unsigned long flags, void (*ctor)(void *))
+{
+ struct kmem_cache *s;
+
s = kmem_cache_alloc(kmem_cache, GFP_KERNEL);
if (s) {
if (kmem_cache_open(s, name,
* Common [12/19] Move sysfs_slab_add to common
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
` (10 preceding siblings ...)
2012-08-02 20:15 ` Common [11/19] Do slab aliasing call from common code Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 20:15 ` Common [13/19] slub: Use a statically allocated kmem_cache boot structure for bootstrap Christoph Lameter
` (7 subsequent siblings)
19 siblings, 0 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: move_sysfs_slab_add --]
[-- Type: text/plain, Size: 2696 bytes --]
Simplify locking by moving the sysfs_slab_add() call to after all locks
have been dropped. This eases the upcoming move to provide sysfs
support for all allocators.
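Condensed from the hunks below, the resulting flow in kmem_cache_create()
looks roughly like:

	mutex_lock(&slab_mutex);
	s = __kmem_cache_create(n, size, align, flags, ctor);
	mutex_unlock(&slab_mutex);
	put_online_cpus();

	if (s->refcount == 1)
		sysfs_slab_add(s);	/* no locks held anymore */

	return s;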
Signed-off-by: Christoph Lameter <cl@linux.com>
Index: linux-2.6/mm/slab.h
===================================================================
--- linux-2.6.orig/mm/slab.h 2012-08-02 14:23:08.071846583 -0500
+++ linux-2.6/mm/slab.h 2012-08-02 14:23:11.463907400 -0500
@@ -39,10 +39,13 @@
#ifdef CONFIG_SLUB
struct kmem_cache *__kmem_cache_alias(const char *name, size_t size,
size_t align, unsigned long flags, void (*ctor)(void *));
+extern int sysfs_slab_add(struct kmem_cache *s);
#else
static inline struct kmem_cache *__kmem_cache_alias(const char *name, size_t size,
size_t align, unsigned long flags, void (*ctor)(void *))
{ return NULL; }
+static inline int sysfs_slab_add(struct kmem_cache *s) { return 0; }
+
#endif
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c 2012-08-02 14:23:08.071846583 -0500
+++ linux-2.6/mm/slab_common.c 2012-08-02 14:23:11.463907400 -0500
@@ -140,6 +140,9 @@
return NULL;
}
+ if (s->refcount == 1)
+ sysfs_slab_add(s);
+
return s;
}
EXPORT_SYMBOL(kmem_cache_create);
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2012-08-02 14:23:08.075846653 -0500
+++ linux-2.6/mm/slub.c 2012-08-02 14:23:11.467907468 -0500
@@ -202,12 +202,10 @@
enum track_item { TRACK_ALLOC, TRACK_FREE };
#ifdef CONFIG_SYSFS
-static int sysfs_slab_add(struct kmem_cache *);
static int sysfs_slab_alias(struct kmem_cache *, const char *);
static void sysfs_slab_remove(struct kmem_cache *);
#else
-static inline int sysfs_slab_add(struct kmem_cache *s) { return 0; }
static inline int sysfs_slab_alias(struct kmem_cache *s, const char *p)
{ return 0; }
static inline void sysfs_slab_remove(struct kmem_cache *s) { }
@@ -3948,16 +3946,7 @@
if (s) {
if (kmem_cache_open(s, name,
size, align, flags, ctor)) {
- int r;
-
- mutex_unlock(&slab_mutex);
- r = sysfs_slab_add(s);
- mutex_lock(&slab_mutex);
-
- if (!r)
- return s;
-
- kmem_cache_close(s);
+ return s;
}
kmem_cache_free(kmem_cache, s);
}
@@ -5259,7 +5248,7 @@
return name;
}
-static int sysfs_slab_add(struct kmem_cache *s)
+int sysfs_slab_add(struct kmem_cache *s)
{
int err;
const char *name;
* Common [13/19] slub: Use a statically allocated kmem_cache boot structure for bootstrap
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
` (11 preceding siblings ...)
2012-08-02 20:15 ` Common [12/19] Move sysfs_slab_add to common Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 20:15 ` Common [14/19] slub: Introduce function for opening boot caches Christoph Lameter
` (6 subsequent siblings)
19 siblings, 0 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: slub_static_init --]
[-- Type: text/plain, Size: 3240 bytes --]
Simplify bootstrap by statically allocating two kmem_cache structures. These
are freed after bootup is complete. This allows us to no longer worry about
calculating the sizes of kmem_cache structures during bootstrap.
Reviewed-by: Glauber Costa <glommer@parallels.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slub.c | 32 +++++++++-----------------------
1 file changed, 9 insertions(+), 23 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2012-08-01 14:52:12.944165176 -0500
+++ linux-2.6/mm/slub.c 2012-08-01 14:57:05.201430270 -0500
@@ -3686,13 +3686,13 @@
}
}
+static __initdata struct kmem_cache boot_kmem_cache,
+ boot_kmem_cache_node;
+
void __init kmem_cache_init(void)
{
int i;
- int caches = 0;
- struct kmem_cache *temp_kmem_cache;
- int order;
- struct kmem_cache *temp_kmem_cache_node;
+ int caches = 2;
unsigned long kmalloc_size;
if (debug_guardpage_minorder())
@@ -3701,19 +3701,10 @@
kmem_size = offsetof(struct kmem_cache, node) +
nr_node_ids * sizeof(struct kmem_cache_node *);
- /* Allocate two kmem_caches from the page allocator */
kmalloc_size = ALIGN(kmem_size, cache_line_size());
- order = get_order(2 * kmalloc_size);
- kmem_cache = (void *)__get_free_pages(GFP_NOWAIT, order);
-
- /*
- * Must first have the slab cache available for the allocations of the
- * struct kmem_cache_node's. There is special bootstrap code in
- * kmem_cache_open for slab_state == DOWN.
- */
- kmem_cache_node = (void *)kmem_cache + kmalloc_size;
+ kmem_cache_node = &boot_kmem_cache_node;
- kmem_cache_open(kmem_cache_node, "kmem_cache_node",
+ kmem_cache_open(&boot_kmem_cache_node, "kmem_cache_node",
sizeof(struct kmem_cache_node),
0, SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
@@ -3722,29 +3713,21 @@
/* Able to allocate the per node structures */
slab_state = PARTIAL;
- temp_kmem_cache = kmem_cache;
- kmem_cache_open(kmem_cache, "kmem_cache", kmem_size,
+ kmem_cache_open(&boot_kmem_cache, "kmem_cache", kmem_size,
0, SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
- kmem_cache = kmem_cache_alloc(kmem_cache, GFP_NOWAIT);
- memcpy(kmem_cache, temp_kmem_cache, kmem_size);
+ kmem_cache = kmem_cache_alloc(&boot_kmem_cache, GFP_NOWAIT);
+ memcpy(kmem_cache, &boot_kmem_cache, kmem_size);
/*
* Allocate kmem_cache_node properly from the kmem_cache slab.
* kmem_cache_node is separately allocated so no need to
* update any list pointers.
*/
- temp_kmem_cache_node = kmem_cache_node;
-
kmem_cache_node = kmem_cache_alloc(kmem_cache, GFP_NOWAIT);
- memcpy(kmem_cache_node, temp_kmem_cache_node, kmem_size);
+ memcpy(kmem_cache_node, &boot_kmem_cache_node, kmem_size);
kmem_cache_bootstrap_fixup(kmem_cache_node);
-
- caches++;
kmem_cache_bootstrap_fixup(kmem_cache);
- caches++;
- /* Free temporary boot structure */
- free_pages((unsigned long)temp_kmem_cache, order);
/* Now we can use the kmem_cache to allocate kmalloc slabs */
--
* Common [14/19] slub: Introduce function for opening boot caches
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
` (12 preceding siblings ...)
2012-08-02 20:15 ` Common [13/19] slub: Use a statically allocated kmem_cache boot structure for bootstrap Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 20:15 ` Common [15/19] create common create_kmalloc_cache() Christoph Lameter
` (5 subsequent siblings)
19 siblings, 0 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: slub_create_open_boot --]
[-- Type: text/plain, Size: 1891 bytes --]
The various boot caches are all opened with identical alignment, flags and
constructor arguments. Provide a helper function to avoid the repetition.
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slub.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
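In outline, the helper and its call sites after this patch (a sketch
assembled from the hunks below; note that the helper passes
ARCH_KMALLOC_MINALIGN where the open-coded calls passed an alignment of 0):

static int kmem_cache_open_boot(struct kmem_cache *s, const char *name,
				int size)
{
	/* All boot caches share alignment, flags and the lack of a ctor. */
	return kmem_cache_open(s, name, size, ARCH_KMALLOC_MINALIGN,
			SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
}

/* ... and in kmem_cache_init(): */
	kmem_cache_open_boot(kmem_cache_node, "kmem_cache_node",
			sizeof(struct kmem_cache_node));
	...
	kmem_cache_open_boot(&boot_kmem_cache, "kmem_cache", kmem_size);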
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2012-08-02 14:56:03.915144783 -0500
+++ linux-2.6/mm/slub.c 2012-08-02 14:56:28.451584021 -0500
@@ -3271,6 +3271,12 @@ panic:
return NULL;
}
+static int kmem_cache_open_boot(struct kmem_cache *s, const char *name, int size)
+{
+ return kmem_cache_open(s, name, size, ARCH_KMALLOC_MINALIGN,
+ SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
+}
+
/*
* Conversion table for small slabs sizes / 8 to the index in the
* kmalloc array. This is necessary for slabs < 192 since we have non power
@@ -3704,17 +3710,15 @@ void __init kmem_cache_init(void)
kmalloc_size = ALIGN(kmem_size, cache_line_size());
kmem_cache_node = &boot_kmem_cache_node;
- kmem_cache_open(&boot_kmem_cache_node, "kmem_cache_node",
- sizeof(struct kmem_cache_node),
- 0, SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
+ kmem_cache_open_boot(kmem_cache_node, "kmem_cache_node",
+ sizeof(struct kmem_cache_node));
hotplug_memory_notifier(slab_memory_callback, SLAB_CALLBACK_PRI);
/* Able to allocate the per node structures */
slab_state = PARTIAL;
- kmem_cache_open(&boot_kmem_cache, "kmem_cache", kmem_size,
- 0, SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
+ kmem_cache_open_boot(&boot_kmem_cache, "kmem_cache", kmem_size);
kmem_cache = kmem_cache_alloc(&boot_kmem_cache, GFP_NOWAIT);
memcpy(kmem_cache, &boot_kmem_cache, kmem_size);
--
* Common [15/19] create common create_kmalloc_cache()
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
` (13 preceding siblings ...)
2012-08-02 20:15 ` Common [14/19] slub: Introduce function for opening boot caches Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 20:15 ` Common [16/19] Move kmem_cache allocations into common code Christoph Lameter
` (4 subsequent siblings)
19 siblings, 0 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: extract_create_kmalloc_cache --]
[-- Type: text/plain, Size: 4160 bytes --]
Provide a dedicated function for creating kmalloc caches and use it in both
SLAB and SLUB.
Reviewed-by: Glauber Costa <glommer@parallels.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slab.c | 53 +++++++++++++++++++++++------------------------------
1 file changed, 23 insertions(+), 30 deletions(-)
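The net effect at a SLAB call site, in outline (a sketch based on the hunks
below; registration on slab_caches and the failure panic move into the
common helper):

	/* Before: open-coded create plus list_add at every site. */
	sizes->cs_cachep = __kmem_cache_create(names->name,
			sizes->cs_size, ARCH_KMALLOC_MINALIGN,
			ARCH_KMALLOC_FLAGS|SLAB_PANIC, NULL);
	list_add(&sizes->cs_cachep->list, &slab_caches);

	/* After: one call, shared error handling. */
	sizes->cs_cachep = create_kmalloc_cache(names->name,
			sizes->cs_size, ARCH_KMALLOC_FLAGS);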
Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c 2012-08-02 14:58:11.553429908 -0500
+++ linux-2.6/mm/slab.c 2012-08-02 14:58:17.657539215 -0500
@@ -1673,22 +1673,13 @@ void __init kmem_cache_init(void)
* bug.
*/
- sizes[INDEX_AC].cs_cachep = __kmem_cache_create(names[INDEX_AC].name,
- sizes[INDEX_AC].cs_size,
- ARCH_KMALLOC_MINALIGN,
- ARCH_KMALLOC_FLAGS|SLAB_PANIC,
- NULL);
+ sizes[INDEX_AC].cs_cachep = create_kmalloc_cache(names[INDEX_AC].name,
+ sizes[INDEX_AC].cs_size, ARCH_KMALLOC_FLAGS);
- list_add(&sizes[INDEX_AC].cs_cachep->list, &slab_caches);
- if (INDEX_AC != INDEX_L3) {
+ if (INDEX_AC != INDEX_L3)
sizes[INDEX_L3].cs_cachep =
- __kmem_cache_create(names[INDEX_L3].name,
- sizes[INDEX_L3].cs_size,
- ARCH_KMALLOC_MINALIGN,
- ARCH_KMALLOC_FLAGS|SLAB_PANIC,
- NULL);
- list_add(&sizes[INDEX_L3].cs_cachep->list, &slab_caches);
- }
+ create_kmalloc_cache(names[INDEX_L3].name,
+ sizes[INDEX_L3].cs_size, ARCH_KMALLOC_FLAGS);
slab_early_init = 0;
@@ -1700,23 +1691,14 @@ void __init kmem_cache_init(void)
* Note for systems short on memory removing the alignment will
* allow tighter packing of the smaller caches.
*/
- if (!sizes->cs_cachep) {
- sizes->cs_cachep = __kmem_cache_create(names->name,
- sizes->cs_size,
- ARCH_KMALLOC_MINALIGN,
- ARCH_KMALLOC_FLAGS|SLAB_PANIC,
- NULL);
- list_add(&sizes->cs_cachep->list, &slab_caches);
- }
+ if (!sizes->cs_cachep)
+ sizes->cs_cachep = create_kmalloc_cache(names->name,
+ sizes->cs_size, ARCH_KMALLOC_FLAGS);
+
#ifdef CONFIG_ZONE_DMA
- sizes->cs_dmacachep = __kmem_cache_create(
- names->name_dma,
- sizes->cs_size,
- ARCH_KMALLOC_MINALIGN,
- ARCH_KMALLOC_FLAGS|SLAB_CACHE_DMA|
- SLAB_PANIC,
- NULL);
- list_add(&sizes->cs_dmacachep->list, &slab_caches);
+ sizes->cs_dmacachep = create_kmalloc_cache(
+ names->name_dma, sizes->cs_size,
+ SLAB_CACHE_DMA|ARCH_KMALLOC_FLAGS);
#endif
sizes++;
names++;
Index: linux-2.6/mm/slab.h
===================================================================
--- linux-2.6.orig/mm/slab.h 2012-08-02 14:58:11.541429693 -0500
+++ linux-2.6/mm/slab.h 2012-08-02 14:58:17.657539215 -0500
@@ -33,9 +33,12 @@ extern struct list_head slab_caches;
extern struct kmem_cache *kmem_cache;
/* Functions provided by the slab allocators */
-struct kmem_cache *__kmem_cache_create(const char *name, size_t size,
+extern struct kmem_cache *__kmem_cache_create(const char *name, size_t size,
size_t align, unsigned long flags, void (*ctor)(void *));
+extern struct kmem_cache *create_kmalloc_cache(const char *name, size_t size,
+ unsigned long flags);
+
#ifdef CONFIG_SLUB
struct kmem_cache *__kmem_cache_alias(const char *name, size_t size,
size_t align, unsigned long flags, void (*ctor)(void *));
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c 2012-08-02 14:58:11.561430050 -0500
+++ linux-2.6/mm/slab_common.c 2012-08-02 14:58:17.657539215 -0500
@@ -177,3 +177,21 @@ int slab_is_available(void)
{
return slab_state >= UP;
}
+
+struct kmem_cache *__init create_kmalloc_cache(const char *name, size_t size,
+ unsigned long flags)
+{
+ struct kmem_cache *s;
+
+ s = __kmem_cache_create(name, size, ARCH_KMALLOC_MINALIGN,
+ flags, NULL);
+
+ if (s) {
+ list_add(&s->list, &slab_caches);
+ return s;
+ }
+
+ panic("Creation of kmalloc slab %s size=%ld failed.\n", name, size);
+ return NULL;
+}
+
--
* Common [16/19] Move kmem_cache allocations into common code.
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
` (14 preceding siblings ...)
2012-08-02 20:15 ` Common [15/19] create common create_kmalloc_cache() Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 20:15 ` Common [17/19] Shrink __kmem_cache_create() parameter lists Christoph Lameter
` (3 subsequent siblings)
19 siblings, 0 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: kmem_alloc_common --]
[-- Type: text/plain, Size: 7540 bytes --]
Shift the allocations to common code. That way the allocation and freeing
of the kmem_cache structures are handled entirely by common code.
V1->V2: Use the return code from setup_cpu_cache() in slab instead of
returning -ENOSPC.
Signed-off-by: Christoph Lameter <cl@linux.com>
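The resulting division of labor, in outline (a sketch of the new contract;
the exact error codes are visible in the hunks below):

	/* Common code now owns the kmem_cache structure ... */
	s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
	err = __kmem_cache_create(s, n, size, align, flags, ctor);
	if (!err)
		list_add(&s->list, &slab_caches);
	else
		kmem_cache_free(kmem_cache, s);

	/*
	 * ... while the allocators only initialize the passed structure
	 * and return 0 on success or a negative error code (-E2BIG,
	 * -EINVAL, -ENOMEM) on failure.
	 */
	int __kmem_cache_create(struct kmem_cache *s, const char *name,
		size_t size, size_t align, unsigned long flags,
		void (*ctor)(void *));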
Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c 2012-08-02 14:23:17.660018494 -0500
+++ linux-2.6/mm/slab.c 2012-08-02 14:23:29.056222811 -0500
@@ -2338,13 +2338,13 @@
* cacheline. This can be beneficial if you're counting cycles as closely
* as davem.
*/
-struct kmem_cache *
-__kmem_cache_create (const char *name, size_t size, size_t align,
+int
+__kmem_cache_create (struct kmem_cache *cachep, const char *name, size_t size, size_t align,
unsigned long flags, void (*ctor)(void *))
{
size_t left_over, slab_size, ralign;
- struct kmem_cache *cachep = NULL;
gfp_t gfp;
+ int err;
#if DEBUG
#if FORCED_DEBUG
@@ -2432,11 +2432,6 @@
else
gfp = GFP_NOWAIT;
- /* Get cache's description obj. */
- cachep = kmem_cache_zalloc(kmem_cache, gfp);
- if (!cachep)
- return NULL;
-
cachep->nodelists = (struct kmem_list3 **)&cachep->array[nr_cpu_ids];
cachep->object_size = size;
cachep->align = align;
@@ -2491,8 +2486,7 @@
if (!cachep->num) {
printk(KERN_ERR
"kmem_cache_create: couldn't create cache %s.\n", name);
- kmem_cache_free(kmem_cache, cachep);
- return NULL;
+ return -E2BIG;
}
slab_size = ALIGN(cachep->num * sizeof(kmem_bufctl_t)
+ sizeof(struct slab), align);
@@ -2549,9 +2543,10 @@
cachep->name = name;
cachep->refcount = 1;
- if (setup_cpu_cache(cachep, gfp)) {
+ err = setup_cpu_cache(cachep, gfp);
+ if (err) {
__kmem_cache_shutdown(cachep);
- return NULL;
+ return err;
}
if (flags & SLAB_DEBUG_OBJECTS) {
@@ -2564,7 +2559,7 @@
slab_set_debugobj_lock_classes(cachep);
}
- return cachep;
+ return 0;
}
#if DEBUG
Index: linux-2.6/mm/slab.h
===================================================================
--- linux-2.6.orig/mm/slab.h 2012-08-02 14:23:17.000000000 -0500
+++ linux-2.6/mm/slab.h 2012-08-02 14:23:29.056222811 -0500
@@ -33,8 +33,8 @@
extern struct kmem_cache *kmem_cache;
/* Functions provided by the slab allocators */
-extern struct kmem_cache *__kmem_cache_create(const char *name, size_t size,
- size_t align, unsigned long flags, void (*ctor)(void *));
+extern int __kmem_cache_create(struct kmem_cache *, const char *name,
+ size_t size, size_t align, unsigned long flags, void (*ctor)(void *));
extern struct kmem_cache *create_kmalloc_cache(const char *name, size_t size,
unsigned long flags);
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c 2012-08-02 14:23:17.000000000 -0500
+++ linux-2.6/mm/slab_common.c 2012-08-02 14:26:20.039275929 -0500
@@ -104,19 +104,23 @@
goto out_locked;
}
- s = __kmem_cache_create(n, size, align, flags, ctor);
+ s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
if (s) {
- /*
- * Check if the slab has actually been created and if it was a
- * real instatiation. Aliases do not belong on the list
- */
- if (s->refcount == 1)
+ err = __kmem_cache_create(s, n, size, align, flags, ctor);
+
+ if (!err)
+
list_add(&s->list, &slab_caches);
+ else {
+ kfree(n);
+ kmem_cache_free(kmem_cache, s);
+ }
+
} else {
kfree(n);
- err = -ENOSYS; /* Until __kmem_cache_create returns code */
+ err = -ENOMEM;
}
out_locked:
@@ -181,17 +185,21 @@
struct kmem_cache *__init create_kmalloc_cache(const char *name, size_t size,
unsigned long flags)
{
- struct kmem_cache *s;
+ struct kmem_cache *s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
+ int r = -ENOMEM;
- s = __kmem_cache_create(name, size, ARCH_KMALLOC_MINALIGN,
+ if (s) {
+ r = __kmem_cache_create(s, name, size, ARCH_KMALLOC_MINALIGN,
flags, NULL);
- if (s) {
- list_add(&s->list, &slab_caches);
- return s;
+ if (!r) {
+ list_add(&s->list, &slab_caches);
+ return s;
+ }
}
- panic("Creation of kmalloc slab %s size=%ld failed.\n", name, size);
+ panic("Creation of kmalloc slab %s size=%ld failed. Reason %d\n",
+ name, size, r);
return NULL;
}
Index: linux-2.6/mm/slob.c
===================================================================
--- linux-2.6.orig/mm/slob.c 2012-08-02 14:21:24.000000000 -0500
+++ linux-2.6/mm/slob.c 2012-08-02 14:23:29.056222811 -0500
@@ -508,34 +508,27 @@
}
EXPORT_SYMBOL(ksize);
-struct kmem_cache *__kmem_cache_create(const char *name, size_t size,
+int __kmem_cache_create(struct kmem_cache *c, const char *name, size_t size,
size_t align, unsigned long flags, void (*ctor)(void *))
{
- struct kmem_cache *c;
-
- c = slob_alloc(sizeof(struct kmem_cache),
- GFP_KERNEL, ARCH_KMALLOC_MINALIGN, -1);
-
- if (c) {
- c->name = name;
- c->size = size;
- if (flags & SLAB_DESTROY_BY_RCU) {
- /* leave room for rcu footer at the end of object */
- c->size += sizeof(struct slob_rcu);
- }
- c->flags = flags;
- c->ctor = ctor;
- /* ignore alignment unless it's forced */
- c->align = (flags & SLAB_HWCACHE_ALIGN) ? SLOB_ALIGN : 0;
- if (c->align < ARCH_SLAB_MINALIGN)
- c->align = ARCH_SLAB_MINALIGN;
- if (c->align < align)
- c->align = align;
-
- kmemleak_alloc(c, sizeof(struct kmem_cache), 1, GFP_KERNEL);
- c->refcount = 1;
+ c->name = name;
+ c->size = size;
+ if (flags & SLAB_DESTROY_BY_RCU) {
+ /* leave room for rcu footer at the end of object */
+ c->size += sizeof(struct slob_rcu);
}
- return c;
+ c->flags = flags;
+ c->ctor = ctor;
+ /* ignore alignment unless it's forced */
+ c->align = (flags & SLAB_HWCACHE_ALIGN) ? SLOB_ALIGN : 0;
+ if (c->align < ARCH_SLAB_MINALIGN)
+ c->align = ARCH_SLAB_MINALIGN;
+ if (c->align < align)
+ c->align = align;
+
+ kmemleak_alloc(c, sizeof(struct kmem_cache), 1, GFP_KERNEL);
+ c->refcount = 1;
+ return 0;
}
void *kmem_cache_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2012-08-02 14:23:21.000000000 -0500
+++ linux-2.6/mm/slub.c 2012-08-02 14:23:29.060222898 -0500
@@ -3027,7 +3027,6 @@
size_t align, unsigned long flags,
void (*ctor)(void *))
{
- memset(s, 0, kmem_size);
s->name = name;
s->ctor = ctor;
s->object_size = size;
@@ -3102,7 +3101,7 @@
goto error;
if (alloc_kmem_cache_cpus(s))
- return 1;
+ return 0;
free_kmem_cache_nodes(s);
error:
@@ -3111,7 +3110,7 @@
"order=%u offset=%u flags=%lx\n",
s->name, (unsigned long)size, s->size, oo_order(s->oo),
s->offset, flags);
- return 0;
+ return -EINVAL;
}
/*
@@ -3901,20 +3900,11 @@
return s;
}
-struct kmem_cache *__kmem_cache_create(const char *name, size_t size,
+int __kmem_cache_create(struct kmem_cache *s,
+ const char *name, size_t size,
size_t align, unsigned long flags, void (*ctor)(void *))
{
- struct kmem_cache *s;
-
- s = kmem_cache_alloc(kmem_cache, GFP_KERNEL);
- if (s) {
- if (kmem_cache_open(s, name,
- size, align, flags, ctor)) {
- return s;
- }
- kmem_cache_free(kmem_cache, s);
- }
- return NULL;
+ return kmem_cache_open(s, name, size, align, flags, ctor);
}
#ifdef CONFIG_SMP
--
* Common [17/19] Shrink __kmem_cache_create() parameter lists
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
` (15 preceding siblings ...)
2012-08-02 20:15 ` Common [16/19] Move kmem_cache allocations into common code Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 20:15 ` Common [18/19] Move kmem_cache refcounting to common code Christoph Lameter
` (2 subsequent siblings)
19 siblings, 0 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: reduce_parameters --]
[-- Type: text/plain, Size: 10628 bytes --]
Do the initial setting of the fields in common code. This will allow
us to push more processing into common code later and improve readability.
V1->V2:
- slab_common.c: Remove #ifdef and use proper targeting for the goto.
- Remove superfluous free of s->name.
- Remove the refcount move to slab_common (seems to be buggy).
Signed-off-by: Christoph Lameter <cl@linux.com>
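After this patch the common caller fills in the structure before handing it
to the allocator, so only the flags still travel through the call. In
outline (a sketch; the kstrdup() failure path is omitted):

	s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
	s->object_size = s->size = size;
	s->align = align;
	s->ctor = ctor;
	s->name = kstrdup(name, GFP_KERNEL);
	err = __kmem_cache_create(s, flags);	/* sees a pre-filled struct */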
Index: linux-2.6/mm/slab.h
===================================================================
--- linux-2.6.orig/mm/slab.h 2012-08-02 15:00:08.631510888 -0500
+++ linux-2.6/mm/slab.h 2012-08-02 15:00:11.179555425 -0500
@@ -33,8 +33,7 @@ extern struct list_head slab_caches;
extern struct kmem_cache *kmem_cache;
/* Functions provided by the slab allocators */
-extern int __kmem_cache_create(struct kmem_cache *, const char *name,
- size_t size, size_t align, unsigned long flags, void (*ctor)(void *));
+extern int __kmem_cache_create(struct kmem_cache *, unsigned long flags);
extern struct kmem_cache *create_kmalloc_cache(const char *name, size_t size,
unsigned long flags);
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c 2012-08-02 15:00:08.631510888 -0500
+++ linux-2.6/mm/slab_common.c 2012-08-02 15:00:11.179555425 -0500
@@ -54,7 +54,6 @@ struct kmem_cache *kmem_cache_create(con
{
struct kmem_cache *s;
int err = 0;
- char *n;
#ifdef CONFIG_DEBUG_VM
if (!name || in_interrupt() || size < sizeof(void *) ||
@@ -98,30 +97,30 @@ struct kmem_cache *kmem_cache_create(con
if (s)
goto out_locked;
- n = kstrdup(name, GFP_KERNEL);
- if (!n) {
- err = -ENOMEM;
- goto out_locked;
- }
-
s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
if (s) {
- err = __kmem_cache_create(s, n, size, align, flags, ctor);
+ s->object_size = s->size = size;
+ s->align = align;
+ s->ctor = ctor;
+ s->name = kstrdup(name, GFP_KERNEL);
+ if (!s->name) {
+ kmem_cache_free(kmem_cache, s);
+ err = -ENOMEM;
+ goto out_locked;
+ }
+ err = __kmem_cache_create(s, flags);
if (!err)
list_add(&s->list, &slab_caches);
else {
- kfree(n);
+ kfree(s->name);
kmem_cache_free(kmem_cache, s);
}
-
- } else {
- kfree(n);
+ } else
err = -ENOMEM;
- }
out_locked:
mutex_unlock(&slab_mutex);
@@ -189,8 +188,10 @@ struct kmem_cache *__init create_kmalloc
int r = -ENOMEM;
if (s) {
- r = __kmem_cache_create(s, name, size, ARCH_KMALLOC_MINALIGN,
- flags, NULL);
+ s->name = name;
+ s->size = s->object_size = size;
+ s->align = ARCH_KMALLOC_MINALIGN;
+ r = __kmem_cache_create(s, flags);
if (!r) {
list_add(&s->list, &slab_caches);
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2012-08-02 15:00:08.631510888 -0500
+++ linux-2.6/mm/slub.c 2012-08-02 15:00:11.183555503 -0500
@@ -3022,16 +3022,9 @@ static int calculate_sizes(struct kmem_c
}
-static int kmem_cache_open(struct kmem_cache *s,
- const char *name, size_t size,
- size_t align, unsigned long flags,
- void (*ctor)(void *))
+static int kmem_cache_open(struct kmem_cache *s, unsigned long flags)
{
- s->name = name;
- s->ctor = ctor;
- s->object_size = size;
- s->align = align;
- s->flags = kmem_cache_flags(size, flags, name, ctor);
+ s->flags = kmem_cache_flags(s->size, flags, s->name, s->ctor);
s->reserved = 0;
if (need_reserve_slab_rcu && (s->flags & SLAB_DESTROY_BY_RCU))
@@ -3108,7 +3101,7 @@ error:
if (flags & SLAB_PANIC)
panic("Cannot create slab %s size=%lu realsize=%u "
"order=%u offset=%u flags=%lx\n",
- s->name, (unsigned long)size, s->size, oo_order(s->oo),
+ s->name, (unsigned long)s->size, s->size, oo_order(s->oo),
s->offset, flags);
return -EINVAL;
}
@@ -3249,8 +3242,11 @@ __setup("slub_nomerge", setup_slub_nomer
static int kmem_cache_open_boot(struct kmem_cache *s, const char *name, int size)
{
- return kmem_cache_open(s, name, size, ARCH_KMALLOC_MINALIGN,
- SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
+ s->size = s->object_size = size;
+ s->name = name;
+ s->align = ARCH_KMALLOC_MINALIGN;
+ s->ctor = NULL;
+ return kmem_cache_open(s, SLAB_HWCACHE_ALIGN | SLAB_PANIC);
}
/*
@@ -3900,11 +3896,9 @@ struct kmem_cache *__kmem_cache_alias(co
return s;
}
-int __kmem_cache_create(struct kmem_cache *s,
- const char *name, size_t size,
- size_t align, unsigned long flags, void (*ctor)(void *))
+int __kmem_cache_create(struct kmem_cache *s, unsigned long flags)
{
- return kmem_cache_open(s, name, size, align, flags, ctor);
+ return kmem_cache_open(s, flags);
}
#ifdef CONFIG_SMP
Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c 2012-08-02 15:00:08.631510888 -0500
+++ linux-2.6/mm/slab.c 2012-08-02 15:00:11.183555503 -0500
@@ -2339,8 +2339,7 @@ static int __init_refok setup_cpu_cache(
* as davem.
*/
int
-__kmem_cache_create (struct kmem_cache *cachep, const char *name, size_t size, size_t align,
- unsigned long flags, void (*ctor)(void *))
+__kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
{
size_t left_over, slab_size, ralign;
gfp_t gfp;
@@ -2374,9 +2373,9 @@ __kmem_cache_create (struct kmem_cache *
* unaligned accesses for some archs when redzoning is used, and makes
* sure any on-slab bufctl's are also correctly aligned.
*/
- if (size & (BYTES_PER_WORD - 1)) {
- size += (BYTES_PER_WORD - 1);
- size &= ~(BYTES_PER_WORD - 1);
+ if (cachep->size & (BYTES_PER_WORD - 1)) {
+ cachep->size += (BYTES_PER_WORD - 1);
+ cachep->size &= ~(BYTES_PER_WORD - 1);
}
/* calculate the final buffer alignment: */
@@ -2389,7 +2388,7 @@ __kmem_cache_create (struct kmem_cache *
* one cacheline.
*/
ralign = cache_line_size();
- while (size <= ralign / 2)
+ while (cachep->size <= ralign / 2)
ralign /= 2;
} else {
ralign = BYTES_PER_WORD;
@@ -2407,8 +2406,8 @@ __kmem_cache_create (struct kmem_cache *
ralign = REDZONE_ALIGN;
/* If redzoning, ensure that the second redzone is suitably
* aligned, by adjusting the object size accordingly. */
- size += REDZONE_ALIGN - 1;
- size &= ~(REDZONE_ALIGN - 1);
+ cachep->size += REDZONE_ALIGN - 1;
+ cachep->size &= ~(REDZONE_ALIGN - 1);
}
/* 2) arch mandated alignment */
@@ -2416,8 +2415,8 @@ __kmem_cache_create (struct kmem_cache *
ralign = ARCH_SLAB_MINALIGN;
}
/* 3) caller mandated alignment */
- if (ralign < align) {
- ralign = align;
+ if (ralign < cachep->align) {
+ ralign = cachep->align;
}
/* disable debug if necessary */
if (ralign > __alignof__(unsigned long long))
@@ -2425,7 +2424,7 @@ __kmem_cache_create (struct kmem_cache *
/*
* 4) Store it.
*/
- align = ralign;
+ cachep->align = ralign;
if (slab_is_available())
gfp = GFP_KERNEL;
@@ -2433,8 +2432,6 @@ __kmem_cache_create (struct kmem_cache *
gfp = GFP_NOWAIT;
cachep->nodelists = (struct kmem_list3 **)&cachep->array[nr_cpu_ids];
- cachep->object_size = size;
- cachep->align = align;
#if DEBUG
/*
@@ -2471,7 +2468,7 @@ __kmem_cache_create (struct kmem_cache *
* it too early on. Always use on-slab management when
* SLAB_NOLEAKTRACE to avoid recursive calls into kmemleak)
*/
- if ((size >= (PAGE_SIZE >> 3)) && !slab_early_init &&
+ if ((cachep->size >= (PAGE_SIZE >> 3)) && !slab_early_init &&
!(flags & SLAB_NOLEAKTRACE))
/*
* Size is large, assume best to place the slab management obj
@@ -2479,17 +2476,15 @@ __kmem_cache_create (struct kmem_cache *
*/
flags |= CFLGS_OFF_SLAB;
- size = ALIGN(size, align);
+ cachep->size = ALIGN(cachep->size, cachep->align);
- left_over = calculate_slab_order(cachep, size, align, flags);
+ left_over = calculate_slab_order(cachep, cachep->size, cachep->align, flags);
- if (!cachep->num) {
- printk(KERN_ERR
- "kmem_cache_create: couldn't create cache %s.\n", name);
+ if (!cachep->num)
return -E2BIG;
- }
+
slab_size = ALIGN(cachep->num * sizeof(kmem_bufctl_t)
- + sizeof(struct slab), align);
+ + sizeof(struct slab), cachep->align);
/*
* If the slab has been placed off-slab, and we have enough space then
@@ -2510,23 +2505,22 @@ __kmem_cache_create (struct kmem_cache *
* poisoning, then it's going to smash the contents of
* the redzone and userword anyhow, so switch them off.
*/
- if (size % PAGE_SIZE == 0 && flags & SLAB_POISON)
+ if (cachep->size % PAGE_SIZE == 0 && flags & SLAB_POISON)
flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
#endif
}
cachep->colour_off = cache_line_size();
/* Offset must be a multiple of the alignment. */
- if (cachep->colour_off < align)
- cachep->colour_off = align;
+ if (cachep->colour_off < cachep->align)
+ cachep->colour_off = cachep->align;
cachep->colour = left_over / cachep->colour_off;
cachep->slab_size = slab_size;
cachep->flags = flags;
cachep->allocflags = 0;
if (CONFIG_ZONE_DMA_FLAG && (flags & SLAB_CACHE_DMA))
cachep->allocflags |= GFP_DMA;
- cachep->size = size;
- cachep->reciprocal_buffer_size = reciprocal_value(size);
+ cachep->reciprocal_buffer_size = reciprocal_value(cachep->size);
if (flags & CFLGS_OFF_SLAB) {
cachep->slabp_cache = kmem_find_general_cachep(slab_size, 0u);
@@ -2539,8 +2533,6 @@ __kmem_cache_create (struct kmem_cache *
*/
BUG_ON(ZERO_OR_NULL_PTR(cachep->slabp_cache));
}
- cachep->ctor = ctor;
- cachep->name = name;
cachep->refcount = 1;
err = setup_cpu_cache(cachep, gfp);
Index: linux-2.6/mm/slob.c
===================================================================
--- linux-2.6.orig/mm/slob.c 2012-08-02 15:00:08.631510888 -0500
+++ linux-2.6/mm/slob.c 2012-08-02 15:00:11.183555503 -0500
@@ -508,17 +508,15 @@ size_t ksize(const void *block)
}
EXPORT_SYMBOL(ksize);
-int __kmem_cache_create(struct kmem_cache *c, const char *name, size_t size,
- size_t align, unsigned long flags, void (*ctor)(void *))
+int __kmem_cache_create(struct kmem_cache *c, unsigned long flags)
{
- c->name = name;
- c->size = size;
+ size_t align = c->size;
+
if (flags & SLAB_DESTROY_BY_RCU) {
/* leave room for rcu footer at the end of object */
c->size += sizeof(struct slob_rcu);
}
c->flags = flags;
- c->ctor = ctor;
/* ignore alignment unless it's forced */
c->align = (flags & SLAB_HWCACHE_ALIGN) ? SLOB_ALIGN : 0;
if (c->align < ARCH_SLAB_MINALIGN)
--
* Common [18/19] Move kmem_cache refcounting to common code
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
` (16 preceding siblings ...)
2012-08-02 20:15 ` Common [17/19] Shrink __kmem_cache_create() parameter lists Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-02 20:15 ` Common [19/19] Common alignment code Christoph Lameter
2012-08-03 13:09 ` Common [00/19] Sl[auo]b: Common code rework V9 Glauber Costa
19 siblings, 0 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: refcount_move --]
[-- Type: text/plain, Size: 2109 bytes --]
Get rid of the refcount handling in the allocators and do that part of
kmem_cache management in the common code.
Signed-off-by: Christoph Lameter <cl@linux.com>
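In outline, the refcount is now set in exactly one place, next to the
registration in common code (a sketch of the resulting tail of
kmem_cache_create()):

	err = __kmem_cache_create(s, flags);
	if (!err) {
		s->refcount = 1;	/* was done by each allocator before */
		list_add(&s->list, &slab_caches);
	}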
Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c 2012-08-02 14:26:36.299565690 -0500
+++ linux-2.6/mm/slab.c 2012-08-02 14:37:42.671454099 -0500
@@ -2533,7 +2533,6 @@
*/
BUG_ON(ZERO_OR_NULL_PTR(cachep->slabp_cache));
}
- cachep->refcount = 1;
r = setup_cpu_cache(cachep, gfp);
if (r) {
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c 2012-08-02 14:36:43.000000000 -0500
+++ linux-2.6/mm/slab_common.c 2012-08-02 14:40:37.686580855 -0500
@@ -111,11 +111,12 @@
}
err = __kmem_cache_create(s, flags);
- if (!err)
+ if (!err) {
+ s->refcount = 1;
list_add(&s->list, &slab_caches);
- else {
+ } else {
kfree(s->name);
kmem_cache_free(kmem_cache, s);
}
@@ -195,6 +196,7 @@
if (!r) {
list_add(&s->list, &slab_caches);
+ s->refcount = 1;
return s;
}
}
Index: linux-2.6/mm/slob.c
===================================================================
--- linux-2.6.orig/mm/slob.c 2012-08-02 14:26:36.000000000 -0500
+++ linux-2.6/mm/slob.c 2012-08-02 14:37:42.671454099 -0500
@@ -524,8 +524,6 @@
if (c->align < align)
c->align = align;
- kmemleak_alloc(c, sizeof(struct kmem_cache), 1, GFP_KERNEL);
- c->refcount = 1;
return 0;
}
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2012-08-02 14:26:36.299565690 -0500
+++ linux-2.6/mm/slub.c 2012-08-02 14:37:42.675454385 -0500
@@ -3086,7 +3086,6 @@
else
s->cpu_partial = 30;
- s->refcount = 1;
#ifdef CONFIG_NUMA
s->remote_node_defrag_ratio = 1000;
#endif
--
* Common [19/19] Common alignment code
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
` (17 preceding siblings ...)
2012-08-02 20:15 ` Common [18/19] Move kmem_cache refcounting to common code Christoph Lameter
@ 2012-08-02 20:15 ` Christoph Lameter
2012-08-03 13:09 ` Common [00/19] Sl[auo]b: Common code rework V9 Glauber Costa
19 siblings, 0 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:15 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: common_alignment --]
[-- Type: text/plain, Size: 7606 bytes --]
Extract the code to do object alignment from the allocators.
Do the alignment calculations in slab_common so that the
__kmem_cache_create functions of the allocators do not have
to deal with alignment.
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slab.c | 22 +---------------------
mm/slab.h | 3 +++
mm/slab_common.c | 30 +++++++++++++++++++++++++++++-
mm/slob.c | 11 -----------
mm/slub.c | 45 ++++++++-------------------------------------
5 files changed, 41 insertions(+), 70 deletions(-)
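As a worked example of the consolidated logic: assuming a 64-byte cache line
and an ARCH_SLAB_MINALIGN of 8, a 24-byte object created with
SLAB_HWCACHE_ALIGN halves the line size while the object still fits into
half of it (64 -> 32), so the object ends up 32-byte aligned and two objects
share one cache line. A hypothetical userspace check of that arithmetic (the
constants are assumptions and max() is spelled out; this is not kernel
code):

#include <stdio.h>

#define SLAB_HWCACHE_ALIGN	0x00002000UL	/* assumed flag value */
#define ARCH_SLAB_MINALIGN	8UL		/* assumed arch minimum */
#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))

static unsigned long cache_line_size(void) { return 64; }	/* assumed */

static unsigned long calculate_alignment(unsigned long flags,
		unsigned long align, unsigned long size)
{
	if (flags & SLAB_HWCACHE_ALIGN) {
		unsigned long ralign = cache_line_size();
		while (size <= ralign / 2)
			ralign /= 2;
		if (ralign > align)
			align = ralign;
	}
	if (align < ARCH_SLAB_MINALIGN)
		align = ARCH_SLAB_MINALIGN;
	return ALIGN(align, sizeof(void *));
}

int main(void)
{
	/* prints 32: two 24-byte objects per 64-byte cache line */
	printf("%lu\n", calculate_alignment(SLAB_HWCACHE_ALIGN, 0, 24));
	return 0;
}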
Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c 2012-08-02 15:00:51.432259050 -0500
+++ linux-2.6/mm/slab.c 2012-08-02 15:00:58.896389642 -0500
@@ -2378,22 +2378,6 @@ __kmem_cache_create (struct kmem_cache *
cachep->size &= ~(BYTES_PER_WORD - 1);
}
- /* calculate the final buffer alignment: */
-
- /* 1) arch recommendation: can be overridden for debug */
- if (flags & SLAB_HWCACHE_ALIGN) {
- /*
- * Default alignment: as specified by the arch code. Except if
- * an object is really small, then squeeze multiple objects into
- * one cacheline.
- */
- ralign = cache_line_size();
- while (cachep->size <= ralign / 2)
- ralign /= 2;
- } else {
- ralign = BYTES_PER_WORD;
- }
-
/*
* Redzoning and user store require word alignment or possibly larger.
* Note this will be overridden by architecture or caller mandated
@@ -2410,10 +2394,6 @@ __kmem_cache_create (struct kmem_cache *
cachep->size &= ~(REDZONE_ALIGN - 1);
}
- /* 2) arch mandated alignment */
- if (ralign < ARCH_SLAB_MINALIGN) {
- ralign = ARCH_SLAB_MINALIGN;
- }
/* 3) caller mandated alignment */
if (ralign < cachep->align) {
ralign = cachep->align;
Index: linux-2.6/mm/slab_common.c
===================================================================
--- linux-2.6.orig/mm/slab_common.c 2012-08-02 15:00:51.432259050 -0500
+++ linux-2.6/mm/slab_common.c 2012-08-02 15:00:58.896389642 -0500
@@ -25,6 +25,34 @@ DEFINE_MUTEX(slab_mutex);
struct kmem_cache *kmem_cache;
/*
+ * Figure out what the alignment of the objects will be given a set of
+ * flags, a user specified alignment and the size of the objects.
+ */
+unsigned long calculate_alignment(unsigned long flags,
+ unsigned long align, unsigned long size)
+{
+ /*
+ * If the user wants hardware cache aligned objects then follow that
+ * suggestion if the object is sufficiently large.
+ *
+ * The hardware cache alignment cannot override the specified
+ * alignment though. If that is greater then use it.
+ */
+ if (flags & SLAB_HWCACHE_ALIGN) {
+ unsigned long ralign = cache_line_size();
+ while (size <= ralign / 2)
+ ralign /= 2;
+ align = max(align, ralign);
+ }
+
+ if (align < ARCH_SLAB_MINALIGN)
+ align = ARCH_SLAB_MINALIGN;
+
+ return ALIGN(align, sizeof(void *));
+}
+
+
+/*
* kmem_cache_create - Create a cache.
* @name: A string which is used in /proc/slabinfo to identify this cache.
* @size: The size of objects to be created in this cache.
@@ -101,7 +129,7 @@ struct kmem_cache *kmem_cache_create(con
if (s) {
s->object_size = s->size = size;
- s->align = align;
+ s->align = calculate_alignment(flags, align, size);
s->ctor = ctor;
s->name = kstrdup(name, GFP_KERNEL);
if (!s->name) {
@@ -191,7 +219,7 @@ struct kmem_cache *__init create_kmalloc
if (s) {
s->name = name;
s->size = s->object_size = size;
- s->align = ARCH_KMALLOC_MINALIGN;
+ s->align = calculate_alignment(flags, ARCH_KMALLOC_MINALIGN, size);
r = __kmem_cache_create(s, flags);
if (!r) {
Index: linux-2.6/mm/slob.c
===================================================================
--- linux-2.6.orig/mm/slob.c 2012-08-02 15:00:51.432259050 -0500
+++ linux-2.6/mm/slob.c 2012-08-02 15:00:58.896389642 -0500
@@ -124,7 +124,6 @@ static inline void clear_slob_page_free(
#define SLOB_UNIT sizeof(slob_t)
#define SLOB_UNITS(size) (((size) + SLOB_UNIT - 1)/SLOB_UNIT)
-#define SLOB_ALIGN L1_CACHE_BYTES
/*
* struct slob_rcu is inserted at the tail of allocated slob blocks, which
@@ -510,20 +509,11 @@ EXPORT_SYMBOL(ksize);
int __kmem_cache_create(struct kmem_cache *c, unsigned long flags)
{
- size_t align = c->size;
-
if (flags & SLAB_DESTROY_BY_RCU) {
/* leave room for rcu footer at the end of object */
c->size += sizeof(struct slob_rcu);
}
c->flags = flags;
- /* ignore alignment unless it's forced */
- c->align = (flags & SLAB_HWCACHE_ALIGN) ? SLOB_ALIGN : 0;
- if (c->align < ARCH_SLAB_MINALIGN)
- c->align = ARCH_SLAB_MINALIGN;
- if (c->align < align)
- c->align = align;
-
return 0;
}
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2012-08-02 15:00:51.432259050 -0500
+++ linux-2.6/mm/slub.c 2012-08-02 15:00:58.896389642 -0500
@@ -2747,32 +2747,6 @@ static inline int calculate_order(int si
return -ENOSYS;
}
-/*
- * Figure out what the alignment of the objects will be.
- */
-static unsigned long calculate_alignment(unsigned long flags,
- unsigned long align, unsigned long size)
-{
- /*
- * If the user wants hardware cache aligned objects then follow that
- * suggestion if the object is sufficiently large.
- *
- * The hardware cache alignment cannot override the specified
- * alignment though. If that is greater then use it.
- */
- if (flags & SLAB_HWCACHE_ALIGN) {
- unsigned long ralign = cache_line_size();
- while (size <= ralign / 2)
- ralign /= 2;
- align = max(align, ralign);
- }
-
- if (align < ARCH_SLAB_MINALIGN)
- align = ARCH_SLAB_MINALIGN;
-
- return ALIGN(align, sizeof(void *));
-}
-
static void
init_kmem_cache_node(struct kmem_cache_node *n)
{
@@ -2906,7 +2880,6 @@ static int calculate_sizes(struct kmem_c
{
unsigned long flags = s->flags;
unsigned long size = s->object_size;
- unsigned long align = s->align;
int order;
/*
@@ -2978,19 +2951,11 @@ static int calculate_sizes(struct kmem_c
#endif
/*
- * Determine the alignment based on various parameters that the
- * user specified and the dynamic determination of cache line size
- * on bootup.
- */
- align = calculate_alignment(flags, align, s->object_size);
- s->align = align;
-
- /*
* SLUB stores one object immediately after another beginning from
* offset 0. In order to align the objects we have to simply size
* each object to conform to the alignment.
*/
- size = ALIGN(size, align);
+ size = ALIGN(size, s->align);
s->size = size;
if (forced_order >= 0)
order = forced_order;
@@ -3019,7 +2984,6 @@ static int calculate_sizes(struct kmem_c
s->max = s->oo;
return !!oo_objects(s->oo);
-
}
static int kmem_cache_open(struct kmem_cache *s, unsigned long flags)
Index: linux-2.6/mm/slab.h
===================================================================
--- linux-2.6.orig/mm/slab.h 2012-08-02 15:00:11.179555425 -0500
+++ linux-2.6/mm/slab.h 2012-08-02 15:00:58.896389642 -0500
@@ -32,6 +32,9 @@ extern struct list_head slab_caches;
/* The slab cache that manages slab cache information */
extern struct kmem_cache *kmem_cache;
+unsigned long calculate_alignment(unsigned long flags,
+ unsigned long align, unsigned long size);
+
/* Functions provided by the slab allocators */
extern int __kmem_cache_create(struct kmem_cache *, unsigned long flags);
--
* Re: Common [01/19] slub: Add debugging to verify correct cache use on kmem_cache_free()
2012-08-02 20:15 ` Common [01/19] slub: Add debugging to verify correct cache use on kmem_cache_free() Christoph Lameter
@ 2012-08-02 20:37 ` David Rientjes
2012-08-02 20:42 ` Christoph Lameter
0 siblings, 1 reply; 34+ messages in thread
From: David Rientjes @ 2012-08-02 20:37 UTC (permalink / raw)
To: Christoph Lameter; +Cc: Glauber Costa, Pekka Enberg, linux-mm, Joonsoo Kim
On Thu, 2 Aug 2012, Christoph Lameter wrote:
> Add additional debugging to check that the object is actually from the cache
> the caller claims. Doing so currently trips up some other debugging code, and
> it takes a lot of effort to infer from that what actually went wrong.
>
> Signed-off-by: Christoph Lameter <cl@linux.com>
Acked-by: David Rientjes <rientjes@google.com>
> Index: linux-2.6/mm/slub.c
> ===================================================================
> --- linux-2.6.orig/mm/slub.c 2012-08-02 13:52:35.314898373 -0500
> +++ linux-2.6/mm/slub.c 2012-08-02 13:52:38.662958767 -0500
> @@ -2607,6 +2607,13 @@
>
> page = virt_to_head_page(x);
>
> + if (kmem_cache_debug(s) && page->slab != s) {
> + printk("kmem_cache_free: Wrong slab cache. %s but object"
> + " is from %s\n", page->slab->name, s->name);
> + WARN_ON(1);
> + return;
> + }
> +
This could quickly spam the kernel log depending on how frequently objects
are being freed from the buggy callsite; should we disable further
debugging for the cache in situations like this?
* Re: Common [01/19] slub: Add debugging to verify correct cache use on kmem_cache_free()
2012-08-02 20:37 ` David Rientjes
@ 2012-08-02 20:42 ` Christoph Lameter
2012-08-02 20:47 ` David Rientjes
0 siblings, 1 reply; 34+ messages in thread
From: Christoph Lameter @ 2012-08-02 20:42 UTC (permalink / raw)
To: David Rientjes; +Cc: Glauber Costa, Pekka Enberg, linux-mm, Joonsoo Kim
On Thu, 2 Aug 2012, David Rientjes wrote:
> This could quickly spam the kernel log depending on how frequently objects
> are being freed from the buggy callsite, should we disable further
> debugging for the cache in situations like this?
This condition is pretty serious. The free action will be skipped
and we will be continually leaking memory. I think it's best to keep on
logging this until someone does something about the problem.
* Re: Common [03/19] Rename oops label
2012-08-02 20:15 ` Common [03/19] Rename oops label Christoph Lameter
@ 2012-08-02 20:45 ` David Rientjes
0 siblings, 0 replies; 34+ messages in thread
From: David Rientjes @ 2012-08-02 20:45 UTC (permalink / raw)
To: Christoph Lameter; +Cc: Glauber Costa, Pekka Enberg, linux-mm, Joonsoo Kim
On Thu, 2 Aug 2012, Christoph Lameter wrote:
> The label is actually used for successful exits so change the name.
>
This patch is on top of the common slab code, where this label has a single
reference and is not used for a successful exit until patch 11 in this
series, so perhaps rename it in the patch where this becomes true?
* Re: Common [01/19] slub: Add debugging to verify correct cache use on kmem_cache_free()
2012-08-02 20:42 ` Christoph Lameter
@ 2012-08-02 20:47 ` David Rientjes
2012-08-03 13:45 ` Christoph Lameter
0 siblings, 1 reply; 34+ messages in thread
From: David Rientjes @ 2012-08-02 20:47 UTC (permalink / raw)
To: Christoph Lameter; +Cc: Glauber Costa, Pekka Enberg, linux-mm, Joonsoo Kim
On Thu, 2 Aug 2012, Christoph Lameter wrote:
> This condition is pretty serious. The free action will be skipped
> and we will be continually leaking memory. I think its best to keep on
> logging this until someohne does something about the problem.
>
Dozens of lines will be emitted to the kernel log because a stack trace is
printed every time a bogus kmem_cache_free() is called; perhaps change the
WARN_ON(1) to at least a WARN_ON_ONCE(1)?
* Re: Common [02/19] slub: Use kmem_cache for the kmem_cache structure
2012-08-02 20:15 ` Common [02/19] slub: Use kmem_cache for the kmem_cache structure Christoph Lameter
@ 2012-08-02 20:52 ` David Rientjes
2012-08-03 14:07 ` Christoph Lameter
2012-08-03 13:20 ` Glauber Costa
1 sibling, 1 reply; 34+ messages in thread
From: David Rientjes @ 2012-08-02 20:52 UTC (permalink / raw)
To: Christoph Lameter; +Cc: Glauber Costa, Pekka Enberg, linux-mm, Joonsoo Kim
On Thu, 2 Aug 2012, Christoph Lameter wrote:
> Do not use kmalloc() but kmem_cache_alloc() for the allocation
> of the kmem_cache structures in slub.
>
> This is the way its supposed to be. Recent merges lost
> the freeing of the kmem_cache structure and so this is also
> fixing memory leak on kmem_cache_destroy() by adding
> the missing free action to sysfs_slab_remove().
>
Nice catch of the memory leak!
> Signed-off-by: Christoph Lameter <cl@linux.com>
>
Acked-by: David Rientjes <rientjes@google.com>
* Re: Common [04/19] Improve error handling in kmem_cache_create
2012-08-02 20:15 ` Common [04/19] Improve error handling in kmem_cache_create Christoph Lameter
@ 2012-08-02 20:58 ` David Rientjes
0 siblings, 0 replies; 34+ messages in thread
From: David Rientjes @ 2012-08-02 20:58 UTC (permalink / raw)
To: Christoph Lameter; +Cc: Glauber Costa, Pekka Enberg, linux-mm, Joonsoo Kim
On Thu, 2 Aug 2012, Christoph Lameter wrote:
> Instead of using s == NULL use an error code. This allows much more
> detailed diagnostics as to what went wrong. As we add more functionality
> from the slab allocators to the common kmem_cache_create() function we will
> also add more error conditions.
>
> Print the error code during the panic as well as in a warning if the module
> can handle failure. The API for kmem_cache_create() currently does not allow
> the returning of an error code. Return NULL but log the cause of the problem
> in the syslog.
>
I like how this also dumps the stack for any kmem_cache_create() that
fails.
> Signed-off-by: Christoph Lameter <cl@linux.com>
>
Acked-by: David Rientjes <rientjes@google.com>
* Re: Common [05/19] Move list_add() to slab_common.c
2012-08-02 20:15 ` Common [05/19] Move list_add() to slab_common.c Christoph Lameter
@ 2012-08-02 21:18 ` David Rientjes
0 siblings, 0 replies; 34+ messages in thread
From: David Rientjes @ 2012-08-02 21:18 UTC (permalink / raw)
To: Christoph Lameter; +Cc: Glauber Costa, Pekka Enberg, linux-mm, Joonsoo Kim
On Thu, 2 Aug 2012, Christoph Lameter wrote:
> Move the code to append the new kmem_cache to the list of slab caches to
> the kmem_cache_create code in the shared code.
>
> This is possible now since the acquisition of the mutex was moved into
> kmem_cache_create().
>
> Reviewed-by: Glauber Costa <glommer@parallels.com>
> Reviewed-by: Joonsoo Kim <js1304@gmail.com>
> Signed-off-by: Christoph Lameter <cl@linux.com>
Acked-by: David Rientjes <rientjes@google.com>
* Re: Common [00/19] Sl[auo]b: Common code rework V9
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
` (18 preceding siblings ...)
2012-08-02 20:15 ` Common [19/19] Common alignment code Christoph Lameter
@ 2012-08-03 13:09 ` Glauber Costa
19 siblings, 0 replies; 34+ messages in thread
From: Glauber Costa @ 2012-08-03 13:09 UTC (permalink / raw)
To: Christoph Lameter; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
On 08/03/2012 12:15 AM, Christoph Lameter wrote:
> Note that the first three patches are candidates
> for 3.5 since they fix certain things.
> The rest could go into -next once we are
> through with initial review.
>
> V8->V9:
> - Fix numerous things pointed out by Glauber.
Unfortunately you didn't: the issue I reported to you is still present.
In any case, I am happy that it helped you to sort out *other* issues =)
I took the time to debug this today and have already found the problem. I'll
comment on the relevant patch.
* Re: Common [02/19] slub: Use kmem_cache for the kmem_cache structure
2012-08-02 20:15 ` Common [02/19] slub: Use kmem_cache for the kmem_cache structure Christoph Lameter
2012-08-02 20:52 ` David Rientjes
@ 2012-08-03 13:20 ` Glauber Costa
2012-08-03 13:52 ` Christoph Lameter
1 sibling, 1 reply; 34+ messages in thread
From: Glauber Costa @ 2012-08-03 13:20 UTC (permalink / raw)
To: Christoph Lameter; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: Type: text/plain, Size: 2135 bytes --]
On 08/03/2012 12:15 AM, Christoph Lameter wrote:
> Do not use kmalloc() but kmem_cache_alloc() for the allocation
> of the kmem_cache structures in slub.
>
> This is the way its supposed to be. Recent merges lost
> the freeing of the kmem_cache structure and so this is also
> fixing memory leak on kmem_cache_destroy() by adding
> the missing free action to sysfs_slab_remove().
Okay, the problems I am seeing are due to this patch.
> Signed-off-by: Christoph Lameter <cl@linux.com>
>
> Index: linux-2.6/mm/slub.c
> ===================================================================
> --- linux-2.6.orig/mm/slub.c 2012-08-01 13:02:18.897656578 -0500
> +++ linux-2.6/mm/slub.c 2012-08-01 13:06:02.673597753 -0500
> @@ -213,7 +213,7 @@
> static inline void sysfs_slab_remove(struct kmem_cache *s)
> {
> kfree(s->name);
> - kfree(s);
> + kmem_cache_free(kmem_cache, s);
> }
>
> #endif
> @@ -3962,7 +3962,7 @@
> if (!n)
> return NULL;
>
> - s = kmalloc(kmem_size, GFP_KERNEL);
> + s = kmem_cache_alloc(kmem_cache, GFP_KERNEL);
> if (s) {
> if (kmem_cache_open(s, n,
> size, align, flags, ctor)) {
> @@ -3979,7 +3979,7 @@
> list_del(&s->list);
> kmem_cache_close(s);
> }
> - kfree(s);
> + kmem_cache_free(kmem_cache, s);
> }
> kfree(n);
> return NULL;
> @@ -5217,7 +5217,7 @@
> struct kmem_cache *s = to_slab(kobj);
>
> kfree(s->name);
> - kfree(s);
> + kmem_cache_free(kmem_cache, s);
> }
>
> static const struct sysfs_ops slab_sysfs_ops = {
> @@ -5342,6 +5342,8 @@
> kobject_uevent(&s->kobj, KOBJ_REMOVE);
> kobject_del(&s->kobj);
> kobject_put(&s->kobj);
> + kfree(s->name);
> + kmem_cache_free(kmem_cache, s);
> }
>
> /*
>
When a non-alias cache is freed, both sysfs_slab_remove and
kmem_cache_release are called.
You are freeing structures on both, so you have two double frees.
sysfs_slab_remove() is the correct place for it, so you need to remove
them from kmem_cache_release(), which becomes an empty function.
Please consider replacing your patch with the attached one; replacing your
patch with it makes my test case work after the series is applied.
[-- Attachment #2: cl-updated.patch --]
[-- Type: text/x-patch, Size: 1577 bytes --]
Index: linux-slab/mm/slub.c
===================================================================
--- linux-slab.orig/mm/slub.c
+++ linux-slab/mm/slub.c
@@ -211,7 +211,7 @@ static inline int sysfs_slab_alias(struc
static inline void sysfs_slab_remove(struct kmem_cache *s)
{
kfree(s->name);
- kfree(s);
+ kmem_cache_free(kmem_cache, s);
}
#endif
@@ -3938,7 +3938,7 @@ struct kmem_cache *__kmem_cache_create(c
if (!n)
return NULL;
- s = kmalloc(kmem_size, GFP_KERNEL);
+ s = kmem_cache_alloc(kmem_cache, GFP_KERNEL);
if (s) {
if (kmem_cache_open(s, n,
size, align, flags, ctor)) {
@@ -3955,7 +3955,7 @@ struct kmem_cache *__kmem_cache_create(c
list_del(&s->list);
kmem_cache_close(s);
}
- kfree(s);
+ kmem_cache_free(kmem_cache, s);
}
kfree(n);
return NULL;
@@ -5188,14 +5188,6 @@ static ssize_t slab_attr_store(struct ko
return err;
}
-static void kmem_cache_release(struct kobject *kobj)
-{
- struct kmem_cache *s = to_slab(kobj);
-
- kfree(s->name);
- kfree(s);
-}
-
static const struct sysfs_ops slab_sysfs_ops = {
.show = slab_attr_show,
.store = slab_attr_store,
@@ -5203,7 +5195,6 @@ static const struct sysfs_ops slab_sysfs
static struct kobj_type slab_ktype = {
.sysfs_ops = &slab_sysfs_ops,
- .release = kmem_cache_release
};
static int uevent_filter(struct kset *kset, struct kobject *kobj)
@@ -5318,6 +5309,8 @@ static void sysfs_slab_remove(struct kme
kobject_uevent(&s->kobj, KOBJ_REMOVE);
kobject_del(&s->kobj);
kobject_put(&s->kobj);
+ kfree(s->name);
+ kmem_cache_free(kmem_cache, s);
}
/*
* Re: Common [01/19] slub: Add debugging to verify correct cache use on kmem_cache_free()
2012-08-02 20:47 ` David Rientjes
@ 2012-08-03 13:45 ` Christoph Lameter
0 siblings, 0 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-03 13:45 UTC (permalink / raw)
To: David Rientjes; +Cc: Glauber Costa, Pekka Enberg, linux-mm, Joonsoo Kim
On Thu, 2 Aug 2012, David Rientjes wrote:
> On Thu, 2 Aug 2012, Christoph Lameter wrote:
>
> > This condition is pretty serious. The free action will be skipped
> > and we will be continually leaking memory. I think its best to keep on
> > logging this until someohne does something about the problem.
> >
>
> Dozens of lines will be emitted to the kernel log because a stack trace is
> printed every time a bogus kmem_cache_free() is called, perhaps change the
> WARN_ON(1) to at least a WARN_ON_ONCE(1)?
Ok,
Subject: Only warn once
Signed-off-by: Christoph Lameter <cl@linux.com>
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2012-08-03 08:43:34.772602802 -0500
+++ linux-2.6/mm/slub.c 2012-08-03 08:44:36.021655892 -0500
@@ -2610,7 +2610,7 @@ void kmem_cache_free(struct kmem_cache *
if (kmem_cache_debug(s) && page->slab != s) {
printk("kmem_cache_free: Wrong slab cache. %s but object"
" is from %s\n", page->slab->name, s->name);
- WARN_ON(1);
+ WARN_ON_ONCE(1);
return;
}
--
* Re: Common [02/19] slub: Use kmem_cache for the kmem_cache structure
2012-08-03 13:20 ` Glauber Costa
@ 2012-08-03 13:52 ` Christoph Lameter
2012-08-03 13:53 ` Glauber Costa
0 siblings, 1 reply; 34+ messages in thread
From: Christoph Lameter @ 2012-08-03 13:52 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
[-- Attachment #1: Type: TEXT/PLAIN, Size: 735 bytes --]
On Fri, 3 Aug 2012, Glauber Costa wrote:
> When a non-alias cache is freed, both sysfs_slab_remove and
> kmem_cache_release are called.
>
> You are freeing structures on both, so you have two double frees.
>
> sysfs_slab_remove() is the correct place for it, so you need to remove
> them from kmem_cache_release(), which becomes an empty function.
So this is another bug in Linus's tree.
> Please consider replacing your patch with the attached one; replacing your
> patch with it makes my test case work after the series is applied.
Ok. In the future, please send a diff of only the changes you have made; we
have various tools that generate such diffs for you. I hope you are using a
version control system like quilt or git?
[-- Attachment #2: Type: TEXT/X-PATCH; NAME=cl-updated.patch, Size: 1640 bytes --]
Index: linux-slab/mm/slub.c
===================================================================
--- linux-slab.orig/mm/slub.c
+++ linux-slab/mm/slub.c
@@ -211,7 +211,7 @@ static inline int sysfs_slab_alias(struc
static inline void sysfs_slab_remove(struct kmem_cache *s)
{
kfree(s->name);
- kfree(s);
+ kmem_cache_free(kmem_cache, s);
}
#endif
@@ -3938,7 +3938,7 @@ struct kmem_cache *__kmem_cache_create(c
if (!n)
return NULL;
- s = kmalloc(kmem_size, GFP_KERNEL);
+ s = kmem_cache_alloc(kmem_cache, GFP_KERNEL);
if (s) {
if (kmem_cache_open(s, n,
size, align, flags, ctor)) {
@@ -3955,7 +3955,7 @@ struct kmem_cache *__kmem_cache_create(c
list_del(&s->list);
kmem_cache_close(s);
}
- kfree(s);
+ kmem_cache_free(kmem_cache, s);
}
kfree(n);
return NULL;
@@ -5188,14 +5188,6 @@ static ssize_t slab_attr_store(struct ko
return err;
}
-static void kmem_cache_release(struct kobject *kobj)
-{
- struct kmem_cache *s = to_slab(kobj);
-
- kfree(s->name);
- kfree(s);
-}
-
static const struct sysfs_ops slab_sysfs_ops = {
.show = slab_attr_show,
.store = slab_attr_store,
@@ -5203,7 +5195,6 @@ static const struct sysfs_ops slab_sysfs
static struct kobj_type slab_ktype = {
.sysfs_ops = &slab_sysfs_ops,
- .release = kmem_cache_release
};
static int uevent_filter(struct kset *kset, struct kobject *kobj)
@@ -5318,6 +5309,8 @@ static void sysfs_slab_remove(struct kme
kobject_uevent(&s->kobj, KOBJ_REMOVE);
kobject_del(&s->kobj);
kobject_put(&s->kobj);
+ kfree(s->name);
+ kmem_cache_free(kmem_cache, s);
}
/*
* Re: Common [02/19] slub: Use kmem_cache for the kmem_cache structure
2012-08-03 13:52 ` Christoph Lameter
@ 2012-08-03 13:53 ` Glauber Costa
2012-08-03 14:02 ` Christoph Lameter
0 siblings, 1 reply; 34+ messages in thread
From: Glauber Costa @ 2012-08-03 13:53 UTC (permalink / raw)
To: Christoph Lameter; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
On 08/03/2012 05:52 PM, Christoph Lameter wrote:
>> > When a non-alias cache is freed, both sysfs_slab_remove and
>> kmem_cache_release are called.
>>
>> You are freeing structures on both, so you have two double frees.
>>
>> sysfs_slab_remove() is the correct place for it, so you need to remove
>> them from kmem_cache_release(), which becomes an empty function.
> So this is another bug in Linus's tree.
>
Indeed, but only when !SYSFS.
When sysfs is on, sysfs_slab_remove actually did no freeing - as you
figured out yourself - so it was actually "correct".
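(In other words - a sketch of the mainline behavior Glauber refers to,
simplified; the real mm/slub.c code differs in detail:)

#ifdef CONFIG_SYSFS
/* sysfs on: sysfs_slab_remove() only drops the kobject; the actual
 * freeing happens later, in the kobject release callback. */
static void kmem_cache_release(struct kobject *kobj)
{
	struct kmem_cache *s = to_slab(kobj);

	kfree(s->name);
	kfree(s);
}
#else
/* sysfs off: no kobject and no release callback, so the inline stub
 * must free the structures itself. */
static inline void sysfs_slab_remove(struct kmem_cache *s)
{
	kfree(s->name);
	kfree(s);
}
#endif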
* Re: Common [02/19] slub: Use kmem_cache for the kmem_cache structure
2012-08-03 13:53 ` Glauber Costa
@ 2012-08-03 14:02 ` Christoph Lameter
0 siblings, 0 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-03 14:02 UTC (permalink / raw)
To: Glauber Costa; +Cc: Pekka Enberg, linux-mm, David Rientjes, Joonsoo Kim
On Fri, 3 Aug 2012, Glauber Costa wrote:
> On 08/03/2012 05:52 PM, Christoph Lameter wrote:
> >> When a non-alias cache is freed, both sysfs_slab_remove and
> >> kmem_cache_release are called.
> >>
> >> You are freeing structures on both, so you have two double frees.
> >>
> >> sysfs_slab_remove() is the correct place for it, so you need to remove
> >> them from kmem_cache_release(), which becomes an empty function.
> > So this is another bug in Linus's tree.
> >
>
> Indeed, but only when !SYSFS.
>
> When sysfs is on, sysfs_slab_remove actually did no freeing - as you
> figured out yourself - so it was actually "correct".
Right, so the correct solution is to leave the freeing in
kmem_cache_release() and remove it from sysfs_slab_remove() - basically
dropping the last hunk of my patch.
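(Roughly, dropping that hunk leaves the following end state - a sketch
under the assumption that the rest of the patch stays as posted:)

/* The release callback keeps doing the freeing, now returning the
 * kmem_cache structure to its own cache instead of kmalloc space. */
static void kmem_cache_release(struct kobject *kobj)
{
	struct kmem_cache *s = to_slab(kobj);

	kfree(s->name);
	kmem_cache_free(kmem_cache, s);
}

/* sysfs_slab_remove() only tears down the sysfs object; the final
 * kobject_put() ends up invoking kmem_cache_release() above. */
static void sysfs_slab_remove(struct kmem_cache *s)
{
	kobject_uevent(&s->kobj, KOBJ_REMOVE);
	kobject_del(&s->kobj);
	kobject_put(&s->kobj);
}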
* Re: Common [02/19] slub: Use kmem_cache for the kmem_cache structure
2012-08-02 20:52 ` David Rientjes
@ 2012-08-03 14:07 ` Christoph Lameter
0 siblings, 0 replies; 34+ messages in thread
From: Christoph Lameter @ 2012-08-03 14:07 UTC (permalink / raw)
To: David Rientjes; +Cc: Glauber Costa, Pekka Enberg, linux-mm, Joonsoo Kim
On Thu, 2 Aug 2012, David Rientjes wrote:
> Nice catch of the memory leak!
No memory leak. kmem_cache was already released in kmem_cache_release()
via sysfs. This introduced the double free that Glauber saw.
Fixing this issue causes subsequent patches to also need some changes.
Sigh.
Thread overview: 34+ messages
2012-08-02 20:15 Common [00/19] Sl[auo]b: Common code rework V9 Christoph Lameter
2012-08-02 20:15 ` Common [01/19] slub: Add debugging to verify correct cache use on kmem_cache_free() Christoph Lameter
2012-08-02 20:37 ` David Rientjes
2012-08-02 20:42 ` Christoph Lameter
2012-08-02 20:47 ` David Rientjes
2012-08-03 13:45 ` Christoph Lameter
2012-08-02 20:15 ` Common [02/19] slub: Use kmem_cache for the kmem_cache structure Christoph Lameter
2012-08-02 20:52 ` David Rientjes
2012-08-03 14:07 ` Christoph Lameter
2012-08-03 13:20 ` Glauber Costa
2012-08-03 13:52 ` Christoph Lameter
2012-08-03 13:53 ` Glauber Costa
2012-08-03 14:02 ` Christoph Lameter
2012-08-02 20:15 ` Common [03/19] Rename oops label Christoph Lameter
2012-08-02 20:45 ` David Rientjes
2012-08-02 20:15 ` Common [04/19] Improve error handling in kmem_cache_create Christoph Lameter
2012-08-02 20:58 ` David Rientjes
2012-08-02 20:15 ` Common [05/19] Move list_add() to slab_common.c Christoph Lameter
2012-08-02 21:18 ` David Rientjes
2012-08-02 20:15 ` Common [06/19] Extract a common function for kmem_cache_destroy Christoph Lameter
2012-08-02 20:15 ` Common [07/19] Always use the name "kmem_cache" for the slab cache with the kmem_cache structure Christoph Lameter
2012-08-02 20:15 ` Common [08/19] Move freeing of kmem_cache structure to common code Christoph Lameter
2012-08-02 20:15 ` Common [09/19] Get rid of __kmem_cache_destroy Christoph Lameter
2012-08-02 20:15 ` Common [10/19] Move duping of slab name to slab_common.c Christoph Lameter
2012-08-02 20:15 ` Common [11/19] Do slab aliasing call from common code Christoph Lameter
2012-08-02 20:15 ` Common [12/19] Move sysfs_slab_add to common Christoph Lameter
2012-08-02 20:15 ` Common [13/19] slub: Use a statically allocated kmem_cache boot structure for bootstrap Christoph Lameter
2012-08-02 20:15 ` Common [14/19] slub: Introduce function for opening boot caches Christoph Lameter
2012-08-02 20:15 ` Common [15/19] create common create_kmalloc_cache() Christoph Lameter
2012-08-02 20:15 ` Common [16/19] Move kmem_cache allocations into common code Christoph Lameter
2012-08-02 20:15 ` Common [17/19] Shrink __kmem_cache_create() parameter lists Christoph Lameter
2012-08-02 20:15 ` Common [18/19] Move kmem_cache refcounting to common code Christoph Lameter
2012-08-02 20:15 ` Common [19/19] Common alignment code Christoph Lameter
2012-08-03 13:09 ` Common [00/19] Sl[auo]b: Common code rework V9 Glauber Costa