From: Jesper Dangaard Brouer <brouer@redhat.com>
To: linux-mm@kvack.org, Christoph Lameter <cl@linux.com>
Cc: Vladimir Davydov <vdavydov@virtuozzo.com>,
Andrew Morton <akpm@linux-foundation.org>,
Linus Torvalds <torvalds@linux-foundation.org>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Jesper Dangaard Brouer <brouer@redhat.com>
Subject: [PATCH 01/10] slub: cleanup code for kmem cgroup support to kmem_cache_free_bulk
Date: Thu, 07 Jan 2016 15:03:38 +0100
Message-ID: <20160107140338.28907.48580.stgit@firesoul>
In-Reply-To: <20160107140253.28907.5469.stgit@firesoul>

This change is primarily an attempt to make it easier to realize the
optimizations the compiler performs when CONFIG_MEMCG_KMEM is not
enabled.

Performance-wise, the overhead is zero even when CONFIG_MEMCG_KMEM is
compiled in: as long as no process has enabled kmem cgroup accounting,
the assignment is replaced by asm NOP operations. This is possible
because memcg_kmem_enabled() uses a static_key_false() construct.
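
For context, memcg_kmem_enabled() is built on a static key, roughly
following this pattern (a sketch, not a verbatim copy of the kernel
headers):

  /* Sketch: a static key turns the disabled case into a patched NOP */
  extern struct static_key memcg_kmem_enabled_key;

  static inline bool memcg_kmem_enabled(void)
  {
          return static_key_false(&memcg_kmem_enabled_key);
  }

While the key is off, static_key_false() compiles down to a no-op jump
label, so cache_from_obj() can bail out early and simply return the
cache it was given.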

It also helps readability, as it avoids accessing the p[] array as
p[size - 1], which exposes that the array is processed backwards
inside the helper function build_detached_freelist().
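
The backwards direction comes from the scan loop at the top of
build_detached_freelist(), which (as a rough sketch of the existing
code) finds the first non-NULL object starting from the end of the
array:

  /* Sketch: scan p[] from the end for the first non-NULL object */
  do {
          object = p[--size];
  } while (!object && size);

Keeping the cache lookup next to this loop means the caller no longer
has to mirror the direction with a p[size - 1] access.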

Lastly, this also makes the code more robust in error cases, such as
NULL pointers in the array; those were handled before commit
033745189b1b ("slub: add missing kmem cgroup support to
kmem_cache_free_bulk") but regressed with it.
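
As a hypothetical usage sketch, a bulk free with a NULL hole at the
end of the array becomes safe again, because the object handed to
cache_from_obj() is only picked after NULL entries have been skipped:

  /* Hypothetical example: NULL entries in the array are tolerated */
  void *objs[3];

  objs[0] = kmem_cache_alloc(s, GFP_KERNEL);
  objs[1] = kmem_cache_alloc(s, GFP_KERNEL);
  objs[2] = NULL;         /* e.g. a failed allocation */

  /* Previously cache_from_obj(orig_s, p[size - 1]) saw the NULL */
  kmem_cache_free_bulk(s, 3, objs);
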
Fixes: 033745189b1b ("slub: add missing kmem cgroup support to kmem_cache_free_bulk")
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
mm/slub.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 46997517406e..0538e45e1964 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2833,8 +2833,9 @@ struct detached_freelist {
* synchronization primitive. Look ahead in the array is limited due
* to performance reasons.
*/
-static int build_detached_freelist(struct kmem_cache *s, size_t size,
- void **p, struct detached_freelist *df)
+static inline
+int build_detached_freelist(struct kmem_cache **s, size_t size,
+ void **p, struct detached_freelist *df)
{
size_t first_skipped_index = 0;
int lookahead = 3;
@@ -2850,8 +2851,11 @@ static int build_detached_freelist(struct kmem_cache *s, size_t size,
if (!object)
return 0;
+ /* Support for memcg, compiler can optimize this out */
+ *s = cache_from_obj(*s, object);
+
/* Start new detached freelist */
- set_freepointer(s, object, NULL);
+ set_freepointer(*s, object, NULL);
df->page = virt_to_head_page(object);
df->tail = object;
df->freelist = object;
@@ -2866,7 +2870,7 @@ static int build_detached_freelist(struct kmem_cache *s, size_t size,
/* df->page is always set at this point */
if (df->page == virt_to_head_page(object)) {
/* Opportunity build freelist */
- set_freepointer(s, object, df->freelist);
+ set_freepointer(*s, object, df->freelist);
df->freelist = object;
df->cnt++;
p[size] = NULL; /* mark object processed */
@@ -2885,7 +2889,6 @@ static int build_detached_freelist(struct kmem_cache *s, size_t size,
return first_skipped_index;
}
-
/* Note that interrupts must be enabled when calling this function. */
void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
{
@@ -2894,12 +2897,9 @@ void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
do {
struct detached_freelist df;
- struct kmem_cache *s;
-
- /* Support for memcg */
- s = cache_from_obj(orig_s, p[size - 1]);
+ struct kmem_cache *s = orig_s;
- size = build_detached_freelist(s, size, p, &df);
+ size = build_detached_freelist(&s, size, p, &df);
if (unlikely(!df.page))
continue;
--