* slub: [RFC] free slabs without holding locks.
From: Christoph Lameter @ 2011-06-20 21:16 UTC
To: Pekka Enberg; +Cc: linux-mm
Just saw the slab lockdep problem. We can free from slub without holding
any locks. I guess something similar can be done for slab but it would be
more complicated given the nesting level of free_block(). Not sure if this
brings us anything but it does not look like this is doing anything
negative to the performance of the allocator.
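The idea in both cases is the same: collect the slabs to be freed on a private list while the lock is held and do the actual freeing only after the lock has been dropped. A minimal userspace sketch of that pattern (hypothetical item type, with a pthread mutex standing in for the node's list_lock; not the SLUB code itself):

#include <pthread.h>
#include <stdlib.h>

struct item {				/* hypothetical stand-in for a slab page */
	struct item *next;
	int inuse;
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct item *partial;		/* list protected by list_lock */

static void shrink_partial(void)
{
	struct item **pp, *p, *defer = NULL;

	pthread_mutex_lock(&list_lock);
	for (pp = &partial; (p = *pp) != NULL; ) {
		if (!p->inuse) {
			*pp = p->next;		/* unlink while the lock is held */
			p->next = defer;	/* collect on a private list */
			defer = p;
		} else
			pp = &p->next;
	}
	pthread_mutex_unlock(&list_lock);

	while (defer) {				/* the actual freeing runs lock-free */
		p = defer;
		defer = defer->next;
		free(p);
	}
}

int main(void)
{
	for (int i = 0; i < 4; i++) {
		struct item *p = malloc(sizeof(*p));
		p->inuse = i & 1;		/* half empty, half still in use */
		p->next = partial;
		partial = p;
	}
	shrink_partial();			/* frees only the two empty items */
	return 0;
}

The patch below applies the same ordering to kmem_cache_shrink() and free_partial(): unlink under the lock, discard the slabs afterwards.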
Subject: slub: free slabs without holding locks.
There are two situations in which slub holds a lock while releasing
pages:
A. During kmem_cache_shrink()
B. During kmem_cache_close()
For both situations build a list while holding the lock and then
release the pages later. Both functions are not performance critical.
After this patch all invocations of free operations are done without
holding any locks.
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slub.c | 49 +++++++++++++++++++++++++------------------------
1 file changed, 25 insertions(+), 24 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2011-06-20 15:23:38.000000000 -0500
+++ linux-2.6/mm/slub.c 2011-06-20 16:11:44.572587454 -0500
@@ -2657,18 +2657,22 @@ static void free_partial(struct kmem_cac
{
unsigned long flags;
struct page *page, *h;
+ LIST_HEAD(empty);
spin_lock_irqsave(&n->list_lock, flags);
list_for_each_entry_safe(page, h, &n->partial, lru) {
- if (!page->inuse) {
- __remove_partial(n, page);
- discard_slab(s, page);
- } else {
- list_slab_objects(s, page,
- "Objects remaining on kmem_cache_close()");
- }
+ if (!page->inuse)
+ list_move(&page->lru, &empty);
}
spin_unlock_irqrestore(&n->list_lock, flags);
+
+ list_for_each_entry_safe(page, h, &empty, lru)
+ discard_slab(s, page);
+
+ if (!list_empty(&n->partial))
+ list_for_each_entry(page, &n->partial, lru)
+ list_slab_objects(s, page,
+ "Objects remaining on kmem_cache_close()");
}
/*
@@ -2702,6 +2706,9 @@ void kmem_cache_destroy(struct kmem_cach
s->refcount--;
if (!s->refcount) {
list_del(&s->list);
+ sysfs_slab_remove(s);
+ up_write(&slub_lock);
+
if (kmem_cache_close(s)) {
printk(KERN_ERR "SLUB %s: %s called for cache that "
"still has objects.\n", s->name, __func__);
@@ -2709,9 +2716,9 @@ void kmem_cache_destroy(struct kmem_cach
}
if (s->flags & SLAB_DESTROY_BY_RCU)
rcu_barrier();
- sysfs_slab_remove(s);
- }
- up_write(&slub_lock);
+ kfree(s);
+ } else
+ up_write(&slub_lock);
}
EXPORT_SYMBOL(kmem_cache_destroy);
@@ -2993,29 +3000,23 @@ int kmem_cache_shrink(struct kmem_cache
* list_lock. page->inuse here is the upper limit.
*/
list_for_each_entry_safe(page, t, &n->partial, lru) {
- if (!page->inuse && slab_trylock(page)) {
- /*
- * Must hold slab lock here because slab_free
- * may have freed the last object and be
- * waiting to release the slab.
- */
- __remove_partial(n, page);
- slab_unlock(page);
- discard_slab(s, page);
- } else {
- list_move(&page->lru,
- slabs_by_inuse + page->inuse);
- }
+ list_move(&page->lru, slabs_by_inuse + page->inuse);
+ if (!page->inuse)
+ n->nr_partial--;
}
/*
* Rebuild the partial list with the slabs filled up most
* first and the least used slabs at the end.
*/
- for (i = objects - 1; i >= 0; i--)
+ for (i = objects - 1; i > 0; i--)
list_splice(slabs_by_inuse + i, n->partial.prev);
spin_unlock_irqrestore(&n->list_lock, flags);
+
+ /* Release empty slabs */
+ list_for_each_entry_safe(page, t, slabs_by_inuse, lru)
+ discard_slab(s, page);
}
kfree(slabs_by_inuse);
* Re: slub: [RFC] free slabs without holding locks.
From: Pekka Enberg @ 2011-07-07 19:04 UTC
To: Christoph Lameter; +Cc: linux-mm, rientjes
On Mon, 2011-06-20 at 16:16 -0500, Christoph Lameter wrote:
> Just saw the slab lockdep problem. We can free from slub without holding
> any locks. I guess something similar can be done for slab but it would be
> more complicated given the nesting level of free_block(). Not sure if this
> brings us anything but it does not look like this is doing anything
> negative to the performance of the allocator.
>
>
>
> Subject: slub: free slabs without holding locks.
>
> There are two situations in which slub holds a lock while releasing
> pages:
>
> A. During kmem_cache_shrink()
> B. During kmem_cache_close()
>
> For both situations build a list while holding the lock and then
> release the pages later. Both functions are not performance critical.
>
> After this patch all invocations of free operations are done without
> holding any locks.
>
> Signed-off-by: Christoph Lameter <cl@linux.com>
Seems reasonable. David, would you mind taking a look at this?
>
> ---
> mm/slub.c | 49 +++++++++++++++++++++++++------------------------
> 1 file changed, 25 insertions(+), 24 deletions(-)
>
> Index: linux-2.6/mm/slub.c
> ===================================================================
> --- linux-2.6.orig/mm/slub.c 2011-06-20 15:23:38.000000000 -0500
> +++ linux-2.6/mm/slub.c 2011-06-20 16:11:44.572587454 -0500
> @@ -2657,18 +2657,22 @@ static void free_partial(struct kmem_cac
> {
> unsigned long flags;
> struct page *page, *h;
> + LIST_HEAD(empty);
>
> spin_lock_irqsave(&n->list_lock, flags);
> list_for_each_entry_safe(page, h, &n->partial, lru) {
> - if (!page->inuse) {
> - __remove_partial(n, page);
> - discard_slab(s, page);
> - } else {
> - list_slab_objects(s, page,
> - "Objects remaining on kmem_cache_close()");
> - }
> + if (!page->inuse)
> + list_move(&page->lru, &empty);
> }
> spin_unlock_irqrestore(&n->list_lock, flags);
> +
> + list_for_each_entry_safe(page, h, &empty, lru)
> + discard_slab(s, page);
> +
> + if (!list_empty(&n->partial))
> + list_for_each_entry(page, &n->partial, lru)
> + list_slab_objects(s, page,
> + "Objects remaining on kmem_cache_close()");
> }
>
> /*
> @@ -2702,6 +2706,9 @@ void kmem_cache_destroy(struct kmem_cach
> s->refcount--;
> if (!s->refcount) {
> list_del(&s->list);
> + sysfs_slab_remove(s);
> + up_write(&slub_lock);
> +
> if (kmem_cache_close(s)) {
> printk(KERN_ERR "SLUB %s: %s called for cache that "
> "still has objects.\n", s->name, __func__);
> @@ -2709,9 +2716,9 @@ void kmem_cache_destroy(struct kmem_cach
> }
> if (s->flags & SLAB_DESTROY_BY_RCU)
> rcu_barrier();
> - sysfs_slab_remove(s);
> - }
> - up_write(&slub_lock);
> + kfree(s);
> + } else
> + up_write(&slub_lock);
> }
> EXPORT_SYMBOL(kmem_cache_destroy);
>
> @@ -2993,29 +3000,23 @@ int kmem_cache_shrink(struct kmem_cache
> * list_lock. page->inuse here is the upper limit.
> */
> list_for_each_entry_safe(page, t, &n->partial, lru) {
> - if (!page->inuse && slab_trylock(page)) {
> - /*
> - * Must hold slab lock here because slab_free
> - * may have freed the last object and be
> - * waiting to release the slab.
> - */
> - __remove_partial(n, page);
> - slab_unlock(page);
> - discard_slab(s, page);
> - } else {
> - list_move(&page->lru,
> - slabs_by_inuse + page->inuse);
> - }
> + list_move(&page->lru, slabs_by_inuse + page->inuse);
> + if (!page->inuse)
> + n->nr_partial--;
> }
>
> /*
> * Rebuild the partial list with the slabs filled up most
> * first and the least used slabs at the end.
> */
> - for (i = objects - 1; i >= 0; i--)
> + for (i = objects - 1; i > 0; i--)
What's this hunk about?
> list_splice(slabs_by_inuse + i, n->partial.prev);
>
> spin_unlock_irqrestore(&n->list_lock, flags);
> +
> + /* Release empty slabs */
> + list_for_each_entry_safe(page, t, slabs_by_inuse, lru)
> + discard_slab(s, page);
> }
>
> kfree(slabs_by_inuse);
* Re: slub: [RFC] free slabs without holding locks.
From: David Rientjes @ 2011-07-14 0:25 UTC
To: Pekka Enberg; +Cc: Christoph Lameter, linux-mm
On Thu, 7 Jul 2011, Pekka Enberg wrote:
> > Just saw the slab lockdep problem.
Is the lockdep output available for inclusion in the changelog?
> > We can free from slub without holding
> > any locks. I guess something similar can be done for slab but it would be
> > more complicated given the nesting level of free_block(). Not sure if this
> > brings us anything but it does not look like this is doing anything
> > negative to the performance of the allocator.
> >
> >
> >
> > Subject: slub: free slabs without holding locks.
> >
> > There are two situations in which slub holds a lock while releasing
> > pages:
> >
> > A. During kmem_cache_shrink()
> > B. During kmem_cache_close()
> >
> > For both situations build a list while holding the lock and then
> > release the pages later. Both functions are not performance critical.
> >
> > After this patch all invocations of free operations are done without
> > holding any locks.
> >
> > Signed-off-by: Christoph Lameter <cl@linux.com>
>
> Seems reasonable. David, would you mind taking a look at this?
>
Sorry for the delay!
> >
> > ---
> > mm/slub.c | 49 +++++++++++++++++++++++++------------------------
> > 1 file changed, 25 insertions(+), 24 deletions(-)
> >
> > Index: linux-2.6/mm/slub.c
> > ===================================================================
> > --- linux-2.6.orig/mm/slub.c 2011-06-20 15:23:38.000000000 -0500
> > +++ linux-2.6/mm/slub.c 2011-06-20 16:11:44.572587454 -0500
> > @@ -2657,18 +2657,22 @@ static void free_partial(struct kmem_cac
> > {
> > unsigned long flags;
> > struct page *page, *h;
> > + LIST_HEAD(empty);
> >
> > spin_lock_irqsave(&n->list_lock, flags);
> > list_for_each_entry_safe(page, h, &n->partial, lru) {
> > - if (!page->inuse) {
> > - __remove_partial(n, page);
> > - discard_slab(s, page);
> > - } else {
> > - list_slab_objects(s, page,
> > - "Objects remaining on kmem_cache_close()");
> > - }
> > + if (!page->inuse)
> > + list_move(&page->lru, &empty);
> > }
> > spin_unlock_irqrestore(&n->list_lock, flags);
> > +
> > + list_for_each_entry_safe(page, h, &empty, lru)
> > + discard_slab(s, page);
> > +
> > + if (!list_empty(&n->partial))
> > + list_for_each_entry(page, &n->partial, lru)
> > + list_slab_objects(s, page,
> > + "Objects remaining on kmem_cache_close()");
> > }
> >
> > /*
The last iteration to check for any pages remaining on the partial list is
not safe because partial list manipulation is protected by list_lock.
That needs to be fixed by testing for page->inuse during the iteration
while still holding the lock and dropping the later iteration altogether.
> > @@ -2702,6 +2706,9 @@ void kmem_cache_destroy(struct kmem_cach
> > s->refcount--;
> > if (!s->refcount) {
> > list_del(&s->list);
> > + sysfs_slab_remove(s);
> > + up_write(&slub_lock);
> > +
> > if (kmem_cache_close(s)) {
> > printk(KERN_ERR "SLUB %s: %s called for cache that "
> > "still has objects.\n", s->name, __func__);
> > @@ -2709,9 +2716,9 @@ void kmem_cache_destroy(struct kmem_cach
> > }
> > if (s->flags & SLAB_DESTROY_BY_RCU)
> > rcu_barrier();
> > - sysfs_slab_remove(s);
> > - }
> > - up_write(&slub_lock);
> > + kfree(s);
Why the new kfree() here? If the refcount is 0, then this should be
handled when the sysfs entry is released regardless of whether
sysfs_slab_remove() uses the CONFIG_SYSFS variant or not. If kfree(s)
were needed here, we'd be leaking s->name as well.
> > + } else
> > + up_write(&slub_lock);
> > }
> > EXPORT_SYMBOL(kmem_cache_destroy);
> >
* Re: slub: [RFC] free slabs without holding locks.
From: Christoph Lameter @ 2011-07-14 14:20 UTC
To: David Rientjes; +Cc: Pekka Enberg, linux-mm
On Wed, 13 Jul 2011, David Rientjes wrote:
> > > spin_unlock_irqrestore(&n->list_lock, flags);
> > > +
> > > + list_for_each_entry_safe(page, h, &empty, lru)
> > > + discard_slab(s, page);
> > > +
> > > + if (!list_empty(&n->partial))
> > > + list_for_each_entry(page, &n->partial, lru)
> > > + list_slab_objects(s, page,
> > > + "Objects remaining on kmem_cache_close()");
> > > }
> > >
> > > /*
>
> The last iteration to check for any pages remaining on the partial list is
> not safe because partial list manipulation is protected by list_lock.
> That needs to be fixed by testing for page->inuse during the iteration
> while still holding the lock and dropping the later iteration all
> together.
At this point no other process can be accessing the slab anymore. No need
for the list_lock.
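A minimal userspace sketch of that argument (hypothetical cache/registry types with a pthread mutex, not the SLUB code): once the object has been unlinked from the shared structure under the lock, no other thread can reach it any more, so the remaining teardown can run without any lock held.

#include <pthread.h>
#include <stdlib.h>

struct cache {				/* hypothetical stand-in for kmem_cache */
	struct cache *next;
	/* ... per-cache state ... */
};

static pthread_mutex_t registry_lock = PTHREAD_MUTEX_INITIALIZER;
static struct cache *caches;		/* global registry, protected by registry_lock */

static void cache_destroy(struct cache *c)
{
	struct cache **pp;

	pthread_mutex_lock(&registry_lock);
	for (pp = &caches; *pp; pp = &(*pp)->next)
		if (*pp == c) {
			*pp = c->next;	/* unlink: lookups can no longer find it */
			break;
		}
	pthread_mutex_unlock(&registry_lock);

	/*
	 * We are now the only thread that can still reach c, so the
	 * remaining teardown needs no lock at all.
	 */
	free(c);
}

int main(void)
{
	struct cache *c = calloc(1, sizeof(*c));

	c->next = caches;
	caches = c;
	cache_destroy(c);
	return 0;
}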
> > > @@ -2709,9 +2716,9 @@ void kmem_cache_destroy(struct kmem_cach
> > > }
> > > if (s->flags & SLAB_DESTROY_BY_RCU)
> > > rcu_barrier();
> > > - sysfs_slab_remove(s);
> > > - }
> > > - up_write(&slub_lock);
> > > + kfree(s);
>
> Why the new kfree() here? If the refcount is 0, then this should be
> handled when the sysfs entry is released regardless of whether
> sysfs_slab_remove() uses the CONFIG_SYSFS variant or not. If kfree(s)
> were needed here, we'd be leaking s->name as well.
Right. I will fix that.
* slub: free slabs without holding locks (V2)
From: Christoph Lameter @ 2011-07-14 15:35 UTC
To: David Rientjes; +Cc: Pekka Enberg, linux-mm
There are two situations in which slub holds a lock while releasing
pages:
A. During kmem_cache_shrink()
B. During kmem_cache_close()
For A build a list while holding the lock and then release the pages
later. In case of B we are the last remaining user of the slab so
there is no need to take the listlock.
After this patch all calls to the page allocator to free pages are
done without holding any locks.
V1->V2. Remove kfree. Avoid locking in free_partial. Drop slub_lock
too.
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slub.c | 32 +++++++++++++-------------------
1 file changed, 13 insertions(+), 19 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2011-07-14 09:41:03.587673788 -0500
+++ linux-2.6/mm/slub.c 2011-07-14 10:32:04.997654187 -0500
@@ -2652,13 +2652,13 @@ static void list_slab_objects(struct kme
/*
* Attempt to free all partial slabs on a node.
+ * This is called from kmem_cache_close(). We must be the last thread
+ * using the cache and therefore we do not need to lock anymore.
*/
static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
{
- unsigned long flags;
struct page *page, *h;
- spin_lock_irqsave(&n->list_lock, flags);
list_for_each_entry_safe(page, h, &n->partial, lru) {
if (!page->inuse) {
__remove_partial(n, page);
@@ -2668,7 +2668,6 @@ static void free_partial(struct kmem_cac
"Objects remaining on kmem_cache_close()");
}
}
- spin_unlock_irqrestore(&n->list_lock, flags);
}
/*
@@ -2702,6 +2701,7 @@ void kmem_cache_destroy(struct kmem_cach
s->refcount--;
if (!s->refcount) {
list_del(&s->list);
+ up_write(&slub_lock);
if (kmem_cache_close(s)) {
printk(KERN_ERR "SLUB %s: %s called for cache that "
"still has objects.\n", s->name, __func__);
@@ -2710,8 +2710,8 @@ void kmem_cache_destroy(struct kmem_cach
if (s->flags & SLAB_DESTROY_BY_RCU)
rcu_barrier();
sysfs_slab_remove(s);
- }
- up_write(&slub_lock);
+ } else
+ up_write(&slub_lock);
}
EXPORT_SYMBOL(kmem_cache_destroy);
@@ -2993,29 +2993,23 @@ int kmem_cache_shrink(struct kmem_cache
* list_lock. page->inuse here is the upper limit.
*/
list_for_each_entry_safe(page, t, &n->partial, lru) {
- if (!page->inuse && slab_trylock(page)) {
- /*
- * Must hold slab lock here because slab_free
- * may have freed the last object and be
- * waiting to release the slab.
- */
- __remove_partial(n, page);
- slab_unlock(page);
- discard_slab(s, page);
- } else {
- list_move(&page->lru,
- slabs_by_inuse + page->inuse);
- }
+ list_move(&page->lru, slabs_by_inuse + page->inuse);
+ if (!page->inuse)
+ n->nr_partial--;
}
/*
* Rebuild the partial list with the slabs filled up most
* first and the least used slabs at the end.
*/
- for (i = objects - 1; i >= 0; i--)
+ for (i = objects - 1; i > 0; i--)
list_splice(slabs_by_inuse + i, n->partial.prev);
spin_unlock_irqrestore(&n->list_lock, flags);
+
+ /* Release empty slabs */
+ list_for_each_entry_safe(page, t, slabs_by_inuse, lru)
+ discard_slab(s, page);
}
kfree(slabs_by_inuse);
* Re: slub: free slabs without holding locks (V2)
From: Pekka Enberg @ 2011-07-31 16:19 UTC
To: Christoph Lameter; +Cc: David Rientjes, linux-mm
On Thu, Jul 14, 2011 at 6:35 PM, Christoph Lameter <cl@linux.com> wrote:
> There are two situations in which slub holds a lock while releasing
> pages:
>
> A. During kmem_cache_shrink()
> B. During kmem_cache_close()
>
> For A build a list while holding the lock and then release the pages
> later. In case of B we are the last remaining user of the slab so
> there is no need to take the listlock.
>
> After this patch all calls to the page allocator to free pages are
> done without holding any locks.
>
> V1->V2. Remove kfree. Avoid locking in free_partial. Drop slub_lock
> too.
>
> Signed-off-by: Christoph Lameter <cl@linux.com>
I'd like to merge this patch but it doesn't apply on top of Linus'
tree. Care to resend?
* Re: slub: free slabs without holding locks (V2)
From: Christoph Lameter @ 2011-08-01 15:30 UTC
To: Pekka Enberg; +Cc: David Rientjes, linux-mm
On Sun, 31 Jul 2011, Pekka Enberg wrote:
> I'd like to merge this patch but it doesn't apply on top of Linus'
> tree. Care to resend?
Patch rediffed against today's upstream tree.
Subject: slub: free slabs without holding locks (V2)
There are two situations in which slub holds a lock while releasing
pages:
A. During kmem_cache_shrink()
B. During kmem_cache_close()
For A build a list while holding the lock and then release the pages
later. In case of B we are the last remaining user of the slab so
there is no need to take the listlock.
After this patch all calls to the page allocator to free pages are
done without holding any spinlocks. kmem_cache_destroy() will still
hold the slub_lock semaphore.
V1->V2. Remove kfree. Avoid locking in free_partial.
Signed-off-by: Christoph Lameter <cl@linux.com>
---
mm/slub.c | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2011-08-01 10:22:37.455874973 -0500
+++ linux-2.6/mm/slub.c 2011-08-01 10:24:38.525874198 -0500
@@ -2968,13 +2968,13 @@ static void list_slab_objects(struct kme
/*
* Attempt to free all partial slabs on a node.
+ * This is called from kmem_cache_close(). We must be the last thread
+ * using the cache and therefore we do not need to lock anymore.
*/
static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
{
- unsigned long flags;
struct page *page, *h;
- spin_lock_irqsave(&n->list_lock, flags);
list_for_each_entry_safe(page, h, &n->partial, lru) {
if (!page->inuse) {
remove_partial(n, page);
@@ -2984,7 +2984,6 @@ static void free_partial(struct kmem_cac
"Objects remaining on kmem_cache_close()");
}
}
- spin_unlock_irqrestore(&n->list_lock, flags);
}
/*
@@ -3018,6 +3017,7 @@ void kmem_cache_destroy(struct kmem_cach
s->refcount--;
if (!s->refcount) {
list_del(&s->list);
+ up_write(&slub_lock);
if (kmem_cache_close(s)) {
printk(KERN_ERR "SLUB %s: %s called for cache that "
"still has objects.\n", s->name, __func__);
@@ -3026,8 +3026,8 @@ void kmem_cache_destroy(struct kmem_cach
if (s->flags & SLAB_DESTROY_BY_RCU)
rcu_barrier();
sysfs_slab_remove(s);
- }
- up_write(&slub_lock);
+ } else
+ up_write(&slub_lock);
}
EXPORT_SYMBOL(kmem_cache_destroy);
@@ -3345,23 +3345,23 @@ int kmem_cache_shrink(struct kmem_cache
* list_lock. page->inuse here is the upper limit.
*/
list_for_each_entry_safe(page, t, &n->partial, lru) {
- if (!page->inuse) {
- remove_partial(n, page);
- discard_slab(s, page);
- } else {
- list_move(&page->lru,
- slabs_by_inuse + page->inuse);
- }
+ list_move(&page->lru, slabs_by_inuse + page->inuse);
+ if (!page->inuse)
+ n->nr_partial--;
}
/*
* Rebuild the partial list with the slabs filled up most
* first and the least used slabs at the end.
*/
- for (i = objects - 1; i >= 0; i--)
+ for (i = objects - 1; i > 0; i--)
list_splice(slabs_by_inuse + i, n->partial.prev);
spin_unlock_irqrestore(&n->list_lock, flags);
+
+ /* Release empty slabs */
+ list_for_each_entry_safe(page, t, slabs_by_inuse, lru)
+ discard_slab(s, page);
}
kfree(slabs_by_inuse);