From: Vladimir Davydov <vdavydov@virtuozzo.com>
To: Chris Wilson <chris@chris-wilson.co.uk>
Cc: linux-kernel@vger.kernel.org, Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Andrew Morton <akpm@linux-foundation.org>,
Dmitry Safonov <dsafonov@virtuozzo.com>,
Daniel Vetter <daniel.vetter@ffwll.ch>,
Dave Gordon <david.s.gordon@intel.com>,
linux-mm@kvack.org
Subject: Re: [PATCH v2] mm/slub: Run free_partial() outside of the kmem_cache_node->list_lock
Date: Tue, 9 Aug 2016 18:45:39 +0300
Message-ID: <20160809154539.GG1983@esperanza>
In-Reply-To: <1470756466-12493-1-git-send-email-chris@chris-wilson.co.uk>
On Tue, Aug 09, 2016 at 04:27:46PM +0100, Chris Wilson wrote:
...
> diff --git a/mm/slub.c b/mm/slub.c
> index 825ff45..58f0eb6 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3479,6 +3479,7 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
> */
> static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
> {
> + LIST_HEAD(partial_list);
nit: the slabs added to this list are not partially used - they are
completely free - so let's call it 'free_slabs', 'discard_list', or just
'discard', please
> struct page *page, *h;
>
> BUG_ON(irqs_disabled());
> @@ -3486,13 +3487,16 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
> list_for_each_entry_safe(page, h, &n->partial, lru) {
> if (!page->inuse) {
> remove_partial(n, page);
> - discard_slab(s, page);
> + list_add(&page->lru, &partial_list);
If any objects are left in the cache on destruction, the cache won't
actually be destroyed: it stays on the slab list and can get reused
later. So we should use list_move() here, so that n->partial is always
left in a consistent state, even in the case of a leak.
> } else {
> list_slab_objects(s, page,
> "Objects remaining in %s on __kmem_cache_shutdown()");
> }
> }
> spin_unlock_irq(&n->list_lock);
> +
> + list_for_each_entry_safe(page, h, &partial_list, lru)
> + discard_slab(s, page);
> }
>
> /*
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
Thread overview: 8+ messages
2016-08-09 14:46 [PATCH] mm/slub: Run free_partial() outside of the kmem_cache_node->list_lock Chris Wilson
2016-08-09 15:17 ` Vladimir Davydov
2016-08-09 15:27 ` [PATCH v2] " Chris Wilson
2016-08-09 15:45 ` Vladimir Davydov [this message]
2016-08-09 15:52 ` Chris Wilson
2016-08-09 16:06 ` Vladimir Davydov
2016-08-09 16:11 ` [PATCH v3] " Chris Wilson
2016-08-09 16:21 ` Christoph Lameter