* [patch 0/2] Remove destructor support from slab allocators
@ 2007-05-11 16:57 clameter
2007-05-11 16:57 ` [patch 1/2] From: Paul Mundt <lethal@linux-sh.org> clameter
2007-05-11 16:57 ` [patch 2/2] Slab allocators: Drop support for destructors clameter
0 siblings, 2 replies; 9+ messages in thread
From: clameter @ 2007-05-11 16:57 UTC (permalink / raw)
To: akpm; +Cc: linux-kernel
This removes the last use of a slab destructor in the sh arch
and then removes destructor support.
--
^ permalink raw reply [flat|nested] 9+ messages in thread
* [patch 1/2] From: Paul Mundt <lethal@linux-sh.org>
2007-05-11 16:57 [patch 0/2] Remove destructor support from slab allocators clameter
@ 2007-05-11 16:57 ` clameter
2007-05-11 18:39 ` Andrew Morton
2007-05-11 16:57 ` [patch 2/2] Slab allocators: Drop support for destructors clameter
1 sibling, 1 reply; 9+ messages in thread
From: clameter @ 2007-05-11 16:57 UTC (permalink / raw)
To: akpm; +Cc: linux-kernel, Paul Mundt
> I'll take a look at tidying up the PMB slab, getting rid of the dtor
> shouldn't be terribly painful. I simply opted to do the list management
> there since others were doing it for the PGD slab cache at the time that
> was written.
And here's the bit for dropping pmb_cache_dtor(), moving the list
management up to pmb_alloc() and pmb_free().
With this applied, we're all set for killing off slab destructors
from the kernel entirely.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Christoph Lameter <clameter@sgi.com>
--
arch/sh/mm/pmb.c | 79 ++++++++++++++++++++++++++-----------------------------
1 file changed, 38 insertions(+), 41 deletions(-)
Index: linux-2.6.21-mm2/arch/sh/mm/pmb.c
===================================================================
--- linux-2.6.21-mm2.orig/arch/sh/mm/pmb.c 2007-05-11 09:20:59.000000000 -0700
+++ linux-2.6.21-mm2/arch/sh/mm/pmb.c 2007-05-11 09:22:28.000000000 -0700
@@ -3,7 +3,7 @@
*
* Privileged Space Mapping Buffer (PMB) Support.
*
- * Copyright (C) 2005, 2006 Paul Mundt
+ * Copyright (C) 2005, 2006, 2007 Paul Mundt
*
* P1/P2 Section mapping definitions from map32.h, which was:
*
@@ -68,6 +68,32 @@ static inline unsigned long mk_pmb_data(
return mk_pmb_entry(entry) | PMB_DATA;
}
+static DEFINE_SPINLOCK(pmb_list_lock);
+static struct pmb_entry *pmb_list;
+
+static inline void pmb_list_add(struct pmb_entry *pmbe)
+{
+ struct pmb_entry **p, *tmp;
+
+ p = &pmb_list;
+ while ((tmp = *p) != NULL)
+ p = &tmp->next;
+
+ pmbe->next = tmp;
+ *p = pmbe;
+}
+
+static inline void pmb_list_del(struct pmb_entry *pmbe)
+{
+ struct pmb_entry **p, *tmp;
+
+ for (p = &pmb_list; (tmp = *p); p = &tmp->next)
+ if (tmp == pmbe) {
+ *p = tmp->next;
+ return;
+ }
+}
+
struct pmb_entry *pmb_alloc(unsigned long vpn, unsigned long ppn,
unsigned long flags)
{
@@ -81,11 +107,19 @@ struct pmb_entry *pmb_alloc(unsigned lon
pmbe->ppn = ppn;
pmbe->flags = flags;
+ spin_lock_irq(&pmb_list_lock);
+ pmb_list_add(pmbe);
+ spin_unlock_irq(&pmb_list_lock);
+
return pmbe;
}
void pmb_free(struct pmb_entry *pmbe)
{
+ spin_lock_irq(&pmb_list_lock);
+ pmb_list_del(pmbe);
+ spin_unlock_irq(&pmb_list_lock);
+
kmem_cache_free(pmb_cache, pmbe);
}
@@ -167,31 +201,6 @@ void clear_pmb_entry(struct pmb_entry *p
clear_bit(entry, &pmb_map);
}
-static DEFINE_SPINLOCK(pmb_list_lock);
-static struct pmb_entry *pmb_list;
-
-static inline void pmb_list_add(struct pmb_entry *pmbe)
-{
- struct pmb_entry **p, *tmp;
-
- p = &pmb_list;
- while ((tmp = *p) != NULL)
- p = &tmp->next;
-
- pmbe->next = tmp;
- *p = pmbe;
-}
-
-static inline void pmb_list_del(struct pmb_entry *pmbe)
-{
- struct pmb_entry **p, *tmp;
-
- for (p = &pmb_list; (tmp = *p); p = &tmp->next)
- if (tmp == pmbe) {
- *p = tmp->next;
- return;
- }
-}
static struct {
unsigned long size;
@@ -283,25 +292,14 @@ void pmb_unmap(unsigned long addr)
} while (pmbe);
}
-static void pmb_cache_ctor(void *pmb, struct kmem_cache *cachep, unsigned long flags)
+static void pmb_cache_ctor(void *pmb, struct kmem_cache *cachep,
+ unsigned long flags)
{
struct pmb_entry *pmbe = pmb;
memset(pmb, 0, sizeof(struct pmb_entry));
- spin_lock_irq(&pmb_list_lock);
-
pmbe->entry = PMB_NO_ENTRY;
- pmb_list_add(pmbe);
-
- spin_unlock_irq(&pmb_list_lock);
-}
-
-static void pmb_cache_dtor(void *pmb, struct kmem_cache *cachep, unsigned long flags)
-{
- spin_lock_irq(&pmb_list_lock);
- pmb_list_del(pmb);
- spin_unlock_irq(&pmb_list_lock);
}
static int __init pmb_init(void)
@@ -312,8 +310,7 @@ static int __init pmb_init(void)
BUG_ON(unlikely(nr_entries >= NR_PMB_ENTRIES));
pmb_cache = kmem_cache_create("pmb", sizeof(struct pmb_entry), 0,
- SLAB_PANIC, pmb_cache_ctor,
- pmb_cache_dtor);
+ SLAB_PANIC, pmb_cache_ctor, NULL);
jump_to_P2();
--
* [patch 2/2] Slab allocators: Drop support for destructors
2007-05-11 16:57 [patch 0/2] Remove destructor support from slab allocators clameter
2007-05-11 16:57 ` [patch 1/2] From: Paul Mundt <lethal@linux-sh.org> clameter
@ 2007-05-11 16:57 ` clameter
2007-05-11 17:55 ` Pekka Enberg
2007-05-12 1:33 ` Paul Mundt
1 sibling, 2 replies; 9+ messages in thread
From: clameter @ 2007-05-11 16:57 UTC (permalink / raw)
To: akpm; +Cc: linux-kernel, Pekka Enberg, Paul Mundt
There are no users of destructors left. There is no reason why we should
keep checking for destructor calls in the slab allocators.
The RFC for this patch was discussed at
http://marc.info/?l=linux-kernel&m=117882364330705&w=2
Destructors were mainly used for list management, which required them to take a
spinlock. Taking a spinlock in a destructor is risky since the slab
allocators may run the destructors at any time they decide a slab is no longer
needed.
This patch drops destructor support. Any attempt to use a destructor will BUG().
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Christoph Lameter <clameter@sgi.com>
---
include/linux/slub_def.h | 1 -
mm/slab.c | 27 ++-------------------------
mm/slob.c | 6 +-----
mm/slub.c | 46 +++++++++++++++-------------------------------
4 files changed, 18 insertions(+), 62 deletions(-)
Index: linux-2.6.21-mm2/include/linux/slub_def.h
===================================================================
--- linux-2.6.21-mm2.orig/include/linux/slub_def.h 2007-05-11 09:21:03.000000000 -0700
+++ linux-2.6.21-mm2/include/linux/slub_def.h 2007-05-11 09:22:31.000000000 -0700
@@ -40,7 +40,6 @@ struct kmem_cache {
int objects; /* Number of objects in slab */
int refcount; /* Refcount for slab cache destroy */
void (*ctor)(void *, struct kmem_cache *, unsigned long);
- void (*dtor)(void *, struct kmem_cache *, unsigned long);
int inuse; /* Offset to metadata */
int align; /* Alignment */
const char *name; /* Name (only for display!) */
Index: linux-2.6.21-mm2/mm/slab.c
===================================================================
--- linux-2.6.21-mm2.orig/mm/slab.c 2007-05-11 09:21:03.000000000 -0700
+++ linux-2.6.21-mm2/mm/slab.c 2007-05-11 09:22:31.000000000 -0700
@@ -409,9 +409,6 @@ struct kmem_cache {
/* constructor func */
void (*ctor) (void *, struct kmem_cache *, unsigned long);
- /* de-constructor func */
- void (*dtor) (void *, struct kmem_cache *, unsigned long);
-
/* 5) cache creation/removal */
const char *name;
struct list_head next;
@@ -1911,20 +1908,11 @@ static void slab_destroy_objs(struct kme
slab_error(cachep, "end of a freed object "
"was overwritten");
}
- if (cachep->dtor && !(cachep->flags & SLAB_POISON))
- (cachep->dtor) (objp + obj_offset(cachep), cachep, 0);
}
}
#else
static void slab_destroy_objs(struct kmem_cache *cachep, struct slab *slabp)
{
- if (cachep->dtor) {
- int i;
- for (i = 0; i < cachep->num; i++) {
- void *objp = index_to_obj(cachep, slabp, i);
- (cachep->dtor) (objp, cachep, 0);
- }
- }
}
#endif
@@ -2124,7 +2112,7 @@ static int setup_cpu_cache(struct kmem_c
* @align: The required alignment for the objects.
* @flags: SLAB flags
* @ctor: A constructor for the objects.
- * @dtor: A destructor for the objects.
+ * @dtor: A destructor for the objects (not implemented anymore).
*
* Returns a ptr to the cache on success, NULL on failure.
* Cannot be called within a int, but can be interrupted.
@@ -2159,7 +2147,7 @@ kmem_cache_create (const char *name, siz
* Sanity checks... these are all serious usage bugs.
*/
if (!name || in_interrupt() || (size < BYTES_PER_WORD) ||
- (size > (1 << MAX_OBJ_ORDER) * PAGE_SIZE) || (dtor && !ctor)) {
+ (size > (1 << MAX_OBJ_ORDER) * PAGE_SIZE) || dtor) {
printk(KERN_ERR "%s: Early error in slab %s\n", __FUNCTION__,
name);
BUG();
@@ -2213,9 +2201,6 @@ kmem_cache_create (const char *name, siz
if (flags & SLAB_DESTROY_BY_RCU)
BUG_ON(flags & SLAB_POISON);
#endif
- if (flags & SLAB_DESTROY_BY_RCU)
- BUG_ON(dtor);
-
/*
* Always checks flags, a caller might be expecting debug support which
* isn't available.
@@ -2370,7 +2355,6 @@ kmem_cache_create (const char *name, siz
BUG_ON(!cachep->slabp_cache);
}
cachep->ctor = ctor;
- cachep->dtor = dtor;
cachep->name = name;
if (setup_cpu_cache(cachep)) {
@@ -2835,7 +2819,6 @@ failed:
* Perform extra freeing checks:
* - detect bad pointers.
* - POISON/RED_ZONE checking
- * - destructor calls, for caches with POISON+dtor
*/
static void kfree_debugcheck(const void *objp)
{
@@ -2894,12 +2877,6 @@ static void *cache_free_debugcheck(struc
BUG_ON(objnr >= cachep->num);
BUG_ON(objp != index_to_obj(cachep, slabp, objnr));
- if (cachep->flags & SLAB_POISON && cachep->dtor) {
- /* we want to cache poison the object,
- * call the destruction callback
- */
- cachep->dtor(objp + obj_offset(cachep), cachep, 0);
- }
#ifdef CONFIG_DEBUG_SLAB_LEAK
slab_bufctl(slabp)[objnr] = BUFCTL_FREE;
#endif
Index: linux-2.6.21-mm2/mm/slob.c
===================================================================
--- linux-2.6.21-mm2.orig/mm/slob.c 2007-05-11 09:21:03.000000000 -0700
+++ linux-2.6.21-mm2/mm/slob.c 2007-05-11 09:22:31.000000000 -0700
@@ -268,7 +268,6 @@ struct kmem_cache {
unsigned int size, align;
const char *name;
void (*ctor)(void *, struct kmem_cache *, unsigned long);
- void (*dtor)(void *, struct kmem_cache *, unsigned long);
};
struct kmem_cache *kmem_cache_create(const char *name, size_t size,
@@ -278,13 +277,13 @@ struct kmem_cache *kmem_cache_create(con
{
struct kmem_cache *c;
+ BUG_ON(dtor);
c = slob_alloc(sizeof(struct kmem_cache), flags, 0);
if (c) {
c->name = name;
c->size = size;
c->ctor = ctor;
- c->dtor = dtor;
/* ignore alignment unless it's forced */
c->align = (flags & SLAB_HWCACHE_ALIGN) ? SLOB_ALIGN : 0;
if (c->align < align)
@@ -330,9 +329,6 @@ EXPORT_SYMBOL(kmem_cache_zalloc);
void kmem_cache_free(struct kmem_cache *c, void *b)
{
- if (c->dtor)
- c->dtor(b, c, 0);
-
if (c->size < PAGE_SIZE)
slob_free(b, c->size);
else
Index: linux-2.6.21-mm2/mm/slub.c
===================================================================
--- linux-2.6.21-mm2.orig/mm/slub.c 2007-05-11 09:21:03.000000000 -0700
+++ linux-2.6.21-mm2/mm/slub.c 2007-05-11 09:22:31.000000000 -0700
@@ -898,13 +898,13 @@ static void kmem_cache_open_debug_check(
* On 32 bit platforms the limit is 256k. On 64bit platforms
* the limit is 512k.
*
- * Debugging or ctor/dtors may create a need to move the free
+ * Debugging or ctor may create a need to move the free
* pointer. Fail if this happens.
*/
if (s->size >= 65535 * sizeof(void *)) {
BUG_ON(s->flags & (SLAB_RED_ZONE | SLAB_POISON |
SLAB_STORE_USER | SLAB_DESTROY_BY_RCU));
- BUG_ON(s->ctor || s->dtor);
+ BUG_ON(s->ctor);
}
else
/*
@@ -1037,15 +1037,12 @@ static void __free_slab(struct kmem_cach
{
int pages = 1 << s->order;
- if (unlikely(SlabDebug(page) || s->dtor)) {
+ if (unlikely(SlabDebug(page))) {
void *p;
slab_pad_check(s, page);
- for_each_object(p, s, page_address(page)) {
- if (s->dtor)
- s->dtor(p, s, 0);
+ for_each_object(p, s, page_address(page))
check_object(s, page, p, 0);
- }
}
mod_zone_page_state(page_zone(page),
@@ -1883,7 +1880,7 @@ static int calculate_sizes(struct kmem_c
* then we should never poison the object itself.
*/
if ((flags & SLAB_POISON) && !(flags & SLAB_DESTROY_BY_RCU) &&
- !s->ctor && !s->dtor)
+ !s->ctor)
s->flags |= __OBJECT_POISON;
else
s->flags &= ~__OBJECT_POISON;
@@ -1913,7 +1910,7 @@ static int calculate_sizes(struct kmem_c
#ifdef CONFIG_SLUB_DEBUG
if (((flags & (SLAB_DESTROY_BY_RCU | SLAB_POISON)) ||
- s->ctor || s->dtor)) {
+ s->ctor)) {
/*
* Relocate free pointer after the object if it is not
* permitted to overwrite the first word of the object on
@@ -1982,13 +1979,11 @@ static int calculate_sizes(struct kmem_c
static int kmem_cache_open(struct kmem_cache *s, gfp_t gfpflags,
const char *name, size_t size,
size_t align, unsigned long flags,
- void (*ctor)(void *, struct kmem_cache *, unsigned long),
- void (*dtor)(void *, struct kmem_cache *, unsigned long))
+ void (*ctor)(void *, struct kmem_cache *, unsigned long))
{
memset(s, 0, kmem_size);
s->name = name;
s->ctor = ctor;
- s->dtor = dtor;
s->objsize = size;
s->flags = flags;
s->align = align;
@@ -2173,7 +2168,7 @@ static struct kmem_cache *create_kmalloc
down_write(&slub_lock);
if (!kmem_cache_open(s, gfp_flags, name, size, ARCH_KMALLOC_MINALIGN,
- flags, NULL, NULL))
+ flags, NULL))
goto panic;
list_add(&s->list, &slab_caches);
@@ -2485,7 +2480,7 @@ static int slab_unmergeable(struct kmem_
if (slub_nomerge || (s->flags & SLUB_NEVER_MERGE))
return 1;
- if (s->ctor || s->dtor)
+ if (s->ctor)
return 1;
return 0;
@@ -2493,15 +2488,14 @@ static int slab_unmergeable(struct kmem_
static struct kmem_cache *find_mergeable(size_t size,
size_t align, unsigned long flags,
- void (*ctor)(void *, struct kmem_cache *, unsigned long),
- void (*dtor)(void *, struct kmem_cache *, unsigned long))
+ void (*ctor)(void *, struct kmem_cache *, unsigned long))
{
struct list_head *h;
if (slub_nomerge || (flags & SLUB_NEVER_MERGE))
return NULL;
- if (ctor || dtor)
+ if (ctor)
return NULL;
size = ALIGN(size, sizeof(void *));
@@ -2543,8 +2537,10 @@ struct kmem_cache *kmem_cache_create(con
{
struct kmem_cache *s;
+ BUG_ON(dtor);
+
down_write(&slub_lock);
- s = find_mergeable(size, align, flags, dtor, ctor);
+ s = find_mergeable(size, align, flags, ctor);
if (s) {
s->refcount++;
/*
@@ -2558,7 +2554,7 @@ struct kmem_cache *kmem_cache_create(con
} else {
s = kmalloc(kmem_size, GFP_KERNEL);
if (s && kmem_cache_open(s, GFP_KERNEL, name,
- size, align, flags, ctor, dtor)) {
+ size, align, flags, ctor)) {
if (sysfs_slab_add(s)) {
kfree(s);
goto err;
@@ -3199,17 +3195,6 @@ static ssize_t ctor_show(struct kmem_cac
}
SLAB_ATTR_RO(ctor);
-static ssize_t dtor_show(struct kmem_cache *s, char *buf)
-{
- if (s->dtor) {
- int n = sprint_symbol(buf, (unsigned long)s->dtor);
-
- return n + sprintf(buf + n, "\n");
- }
- return 0;
-}
-SLAB_ATTR_RO(dtor);
-
static ssize_t aliases_show(struct kmem_cache *s, char *buf)
{
return sprintf(buf, "%d\n", s->refcount - 1);
@@ -3441,7 +3426,6 @@ static struct attribute * slab_attrs[] =
&partial_attr.attr,
&cpu_slabs_attr.attr,
&ctor_attr.attr,
- &dtor_attr.attr,
&aliases_attr.attr,
&align_attr.attr,
&sanity_checks_attr.attr,
--
* Re: [patch 2/2] Slab allocators: Drop support for destructors
2007-05-11 16:57 ` [patch 2/2] Slab allocators: Drop support for destructors clameter
@ 2007-05-11 17:55 ` Pekka Enberg
2007-05-12 1:33 ` Paul Mundt
1 sibling, 0 replies; 9+ messages in thread
From: Pekka Enberg @ 2007-05-11 17:55 UTC (permalink / raw)
To: clameter; +Cc: akpm, linux-kernel, Paul Mundt
clameter@sgi.com wrote:
> There are no users of destructors left. There is no reason why we should
> keep checking for destructor calls in the slab allocators.
Looks good to me.
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
* Re: [patch 1/2] From: Paul Mundt <lethal@linux-sh.org>
2007-05-11 16:57 ` [patch 1/2] From: Paul Mundt <lethal@linux-sh.org> clameter
@ 2007-05-11 18:39 ` Andrew Morton
2007-05-12 0:22 ` Christoph Lameter
2007-05-12 1:33 ` Paul Mundt
0 siblings, 2 replies; 9+ messages in thread
From: Andrew Morton @ 2007-05-11 18:39 UTC (permalink / raw)
To: clameter; +Cc: linux-kernel, Paul Mundt
On Fri, 11 May 2007 09:57:50 -0700
clameter@sgi.com wrote:
> > I'll take a look at tidying up the PMB slab, getting rid of the dtor
> > shouldn't be terribly painful. I simply opted to do the list management
> > there since others were doing it for the PGD slab cache at the time that
> > was written.
>
> And here's the bit for dropping pmb_cache_dtor(), moving the list
> management up to pmb_alloc() and pmb_free().
>
> With this applied, we're all set for killing off slab destructors
> from the kernel entirely.
hm, this is already in Paul's git tree.
If we're going to slam all this into 2.6.22 then I can just tempdrop Paul's
tree.
However I think we've done enough slab work for 2.6.22 now so I'm inclined
to queue these changes for 2.6.23. That would mean that the slab changes in
-mm have a dependency on the sh git tree which I am sure to forget about.
If I end up merging these changes before Paul merges his tree, sh will
break. Presumably Paul will notice this ;)
* Re: [patch 1/2] From: Paul Mundt <lethal@linux-sh.org>
2007-05-11 18:39 ` Andrew Morton
@ 2007-05-12 0:22 ` Christoph Lameter
2007-05-12 1:33 ` Paul Mundt
1 sibling, 0 replies; 9+ messages in thread
From: Christoph Lameter @ 2007-05-12 0:22 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel, Paul Mundt
On Fri, 11 May 2007, Andrew Morton wrote:
> However I think we've done enough slab work for 2.6.22 now so I'm inclined
> to queue these changes for 2.6.23. That would mean that the slab changes in
> -mm have a dependency on the sh git tree which I am sure to forget about.
> If I end up merging these changes before Paul merges his tree, sh will
> break. Presumably Paul will notice this ;)
Ok, -mm only is fine for what I have planned. I want to add a
kmem_cache_ops structure for 2.6.23. Maybe I can use the now-useless dtor
argument of kmem_cache_create() for this?
* Re: [patch 1/2] From: Paul Mundt <lethal@linux-sh.org>
2007-05-11 18:39 ` Andrew Morton
2007-05-12 0:22 ` Christoph Lameter
@ 2007-05-12 1:33 ` Paul Mundt
2007-05-12 1:41 ` Andrew Morton
1 sibling, 1 reply; 9+ messages in thread
From: Paul Mundt @ 2007-05-12 1:33 UTC (permalink / raw)
To: Andrew Morton; +Cc: clameter, linux-kernel
On Fri, May 11, 2007 at 11:39:15AM -0700, Andrew Morton wrote:
> On Fri, 11 May 2007 09:57:50 -0700
> clameter@sgi.com wrote:
>
> > > I'll take a look at tidying up the PMB slab, getting rid of the dtor
> > > shouldn't be terribly painful. I simply opted to do the list management
> > > there since others were doing it for the PGD slab cache at the time that
> > > was written.
> >
> > And here's the bit for dropping pmb_cache_dtor(), moving the list
> > management up to pmb_alloc() and pmb_free().
> >
> > With this applied, we're all set for killing off slab destructors
> > from the kernel entirely.
>
> hm, this is already in Paul's git tree.
>
> If we're going to slam all this into 2.6.22 then I can just tempdrop Paul's
> tree.
>
> However I think we've done enough slab work for 2.6.22 now so I'm inclined
> to queue these changes for 2.6.23. That would mean that the slab changes in
> -mm have a dependency on the sh git tree which I am sure to forget about.
> If I end up merging these changes before Paul merges his tree, sh will
> break. Presumably Paul will notice this ;)
I can prune it from my tree if you'd rather just bundle these together; I
wasn't sure what the timeline for these changes was, so I opted to
toss the PMB rework into my git tree ahead of time.
On the other hand, if Christoph's changes are going to be queued for
2.6.23, the PMB changes will trickle in well before then anyways.
* Re: [patch 2/2] Slab allocators: Drop support for destructors
2007-05-11 16:57 ` [patch 2/2] Slab allocators: Drop support for destructors clameter
2007-05-11 17:55 ` Pekka Enberg
@ 2007-05-12 1:33 ` Paul Mundt
1 sibling, 0 replies; 9+ messages in thread
From: Paul Mundt @ 2007-05-12 1:33 UTC (permalink / raw)
To: clameter; +Cc: akpm, linux-kernel, Pekka Enberg
On Fri, May 11, 2007 at 09:57:51AM -0700, clameter@sgi.com wrote:
> There are no users of destructors left. There is no reason why we should
> keep checking for destructor calls in the slab allocators.
>
> The RFC for this patch was discussed at
> http://marc.info/?l=linux-kernel&m=117882364330705&w=2
>
> Destructors were mainly used for list management which required them to take a
> spinlock. Taking a spinlock in a destructor is a bit risky since the slab
> allocators may run the destructors anytime they decide a slab is no longer
> needed.
>
> Patch drops destructor support. Any attempt to use a destructor will BUG().
>
> Cc: Pekka Enberg <penberg@cs.helsinki.fi>
> Cc: Paul Mundt <lethal@linux-sh.org>
> Signed-off-by: Christoph Lameter <clameter@sgi.com>
>
Acked-by: Paul Mundt <lethal@linux-sh.org>
* Re: [patch 1/2] From: Paul Mundt <lethal@linux-sh.org>
2007-05-12 1:33 ` Paul Mundt
@ 2007-05-12 1:41 ` Andrew Morton
0 siblings, 0 replies; 9+ messages in thread
From: Andrew Morton @ 2007-05-12 1:41 UTC (permalink / raw)
To: Paul Mundt; +Cc: clameter, linux-kernel
On Sat, 12 May 2007 10:33:00 +0900
Paul Mundt <lethal@linux-sh.org> wrote:
> On Fri, May 11, 2007 at 11:39:15AM -0700, Andrew Morton wrote:
> > On Fri, 11 May 2007 09:57:50 -0700
> > clameter@sgi.com wrote:
> >
> > > > I'll take a look at tidying up the PMB slab, getting rid of the dtor
> > > > shouldn't be terribly painful. I simply opted to do the list management
> > > > there since others were doing it for the PGD slab cache at the time that
> > > > was written.
> > >
> > > And here's the bit for dropping pmb_cache_dtor(), moving the list
> > > management up to pmb_alloc() and pmb_free().
> > >
> > > With this applied, we're all set for killing off slab destructors
> > > from the kernel entirely.
> >
> > hm, this is already in Paul's git tree.
> >
> > If we're going to slam all this into 2.6.22 then I can just tempdrop Paul's
> > tree.
> >
> > However I think we've done enough slab work for 2.6.22 now so I'm inclined
> > to queue these changes for 2.6.23. That would mean that the slab changes in
> > -mm have a dependency on the sh git tree which I am sure to forget about.
> > If I end up merging these changes before Paul merges his tree, sh will
> > break. Presumably Paul will notice this ;)
>
> I can prune it from my tree if you'd rather just bundle these together, I
> wasn't sure what the timeline for these changes were, so I opted just to
> toss the PMB rework in my git tree ahead of time.
>
> On the other hand, if Christoph's changes are going to be queued for
> 2.6.23, the PMB changes will trickle in well before then anyways.
It looks like we'll be going the latter trickle-in way, thanks.