From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Christoph Lameter <cl@linux.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Alexander Duyck <alexander.duyck@gmail.com>,
linux-mm@kvack.org, netdev@vger.kernel.org
Subject: [RFC PATCH] slub: RFC: Improving SLUB performance by 38% on NO-PREEMPT
Date: Thu, 04 Jun 2015 12:31:59 +0200
Message-ID: <20150604103159.4744.75870.stgit@ivy>
This patch improves the performance of the SLUB allocator fastpath by
38%, by avoiding the call to this_cpu_cmpxchg_double() on NO-PREEMPT
kernels.

Reviewers, please point out why this change is wrong, as such a large
improvement should not be possible ;-)
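
For reviewers not deep in mm/slub.c: the fastpath pairs the per-cpu
freelist pointer with a transaction id (tid), and commits both in a
single cmpxchg-double. In rough shape (a simplified sketch, not the
exact kernel code):

redo:
	/* tid and freelist must be read and committed on the same CPU;
	 * the tid advances on every slab operation */
	tid = this_cpu_read(s->cpu_slab->tid);
	object = this_cpu_read(s->cpu_slab->freelist);
	next_object = get_freepointer(s, object);

	/* Commit freelist+tid as one unit. This fails (and retries) if
	 * the task was preempted/migrated, or an interrupt handler
	 * touched the same per-cpu slab, between the reads above and
	 * this point. */
	if (!this_cpu_cmpxchg_double(s->cpu_slab->freelist, s->cpu_slab->tid,
				     object, tid,
				     next_object, next_tid(tid)))
		goto redo;

The open question for the patch below is exactly the interrupt case:
on !CONFIG_PREEMPT the task cannot be preempted between the reads and
the commit, but an IRQ handler can still alloc/free on the same CPU.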
My primary motivation for this patch is to understand and
microbenchmark the MM-layer of the kernel, driven by increasing demand
from the networking stack.
This "microbenchmark" is merely to demonstrate the cost of the
instruction CMPXCHG16B (without LOCK prefix).
My microbench is avail on github[1] (reused "qmempool_bench").
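
For those who want to reproduce the instruction cost without the
kernel module, here is a minimal userspace sketch (this is NOT
qmempool_bench; all names in it are made up for illustration). It
times a dependent chain of CMPXCHG16B instructions without the LOCK
prefix, matching what this_cpu_cmpxchg_double() executes on x86_64:

/* cmpxchg16b_bench.c - build: gcc -O2 cmpxchg16b_bench.c */
#include <stdint.h>
#include <stdio.h>

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;
	asm volatile("rdtsc" : "=a"(lo), "=d"(hi));
	return ((uint64_t)hi << 32) | lo;
}

/* Unlocked cmpxchg16b: compares rdx:rax against *ptr; on match stores
 * rcx:rbx and sets ZF. *ptr must be 16-byte aligned. */
static inline int cmpxchg16b_local(__int128 *ptr,
				   uint64_t *old_lo, uint64_t *old_hi,
				   uint64_t new_lo, uint64_t new_hi)
{
	uint8_t ok;
	asm volatile("cmpxchg16b %0; sete %1"
		     : "+m"(*ptr), "=q"(ok), "+a"(*old_lo), "+d"(*old_hi)
		     : "b"(new_lo), "c"(new_hi)
		     : "memory", "cc");
	return ok;
}

int main(void)
{
	__int128 target __attribute__((aligned(16))) = 0;
	uint64_t lo = 0, hi = 0;	/* expected value, tracks memory */
	const uint64_t loops = 100000000;

	uint64_t t0 = rdtsc();
	for (uint64_t i = 0; i < loops; i++) {
		/* always succeeds: memory becomes lo+1, mimicking the
		 * alloc+free fastpath reusing the same slot */
		cmpxchg16b_local(&target, &lo, &hi, lo + 1, hi);
		lo++;
	}
	uint64_t t1 = rdtsc();

	/* loop overhead is not subtracted; read as an upper bound */
	printf("~%.1f cycles(tsc) per cmpxchg16b\n",
	       (double)(t1 - t0) / loops);
	return 0;
}

Note GCC's __atomic builtins are deliberately not used here: they emit
LOCK CMPXCHG16B, which is more expensive than what the SLUB fastpath
executes.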
The fastpath-reuse (alloc+free) cost (CPU E5-2695):
* 47 cycles(tsc) - 18.948 ns (normal with this_cpu_cmpxchg_double)
* 29 cycles(tsc) - 11.791 ns (with patch)
Thus, from the difference we can deduce the cost of CMPXCHG16B:
* Total saved: 18 cycles - 7.157 ns
* For two CMPXCHG16B (alloc+free): 9 cycles - 3.579 ns saved per instruction
* http://instlatx64.atw.hu/ lists a 9-cycle cost for CMPXCHG16B
This also shows that the cost of this_cpu_cmpxchg_double() in SLUB is
approx 38% of the fastpath cost.
[1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/mm/qmempool_bench.c
The cunning reviewer will also want to know the cost of disabling
interrupts on this CPU. Here it is interesting to see that the
save/restore variant is significantly more expensive:
Cost of local IRQ toggling (CPU E5-2695):
* local_irq_{disable,enable}: 7 cycles(tsc) - 2.861 ns
* local_irq_{save,restore} : 37 cycles(tsc) - 14.846 ns
Even with the additional overhead of local_irq_{disable,enable} added
back, there would still be a saving of 11 cycles (out of 47), i.e. 23%.
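
For reference, the asymmetry is visible in the primitives themselves.
Roughly what they reduce to on x86_64 (a simplified sketch of
arch/x86/include/asm/irqflags.h; kernel context only, these
instructions fault in userspace):

static inline void irq_disable(void)	/* local_irq_disable() */
{
	asm volatile("cli" : : : "memory");
}

static inline void irq_enable(void)	/* local_irq_enable() */
{
	asm volatile("sti" : : : "memory");
}

static inline unsigned long irq_save(void)	/* local_irq_save() */
{
	unsigned long flags;

	/* read the whole FLAGS register, then disable */
	asm volatile("pushf ; pop %0" : "=rm"(flags) : : "memory");
	asm volatile("cli" : : : "memory");
	return flags;
}

static inline void irq_restore(unsigned long flags)	/* local_irq_restore() */
{
	/* popf rewrites the whole FLAGS register; this is the costly
	 * part of the save/restore pair */
	asm volatile("push %0 ; popf" : : "g"(flags) : "memory", "cc");
}

The cli/sti pair is cheap, while save/restore must go through
pushf/popf, and popf in particular is an expensive microcoded
instruction, hence the 37 vs 7 cycles above.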
---
 mm/slub.c | 52 +++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 39 insertions(+), 13 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 54c0876..b31991f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2489,13 +2489,32 @@ redo:
 	 * against code executing on this cpu *not* from access by
 	 * other cpus.
 	 */
-	if (unlikely(!this_cpu_cmpxchg_double(
-			s->cpu_slab->freelist, s->cpu_slab->tid,
-			object, tid,
-			next_object, next_tid(tid)))) {
-
-		note_cmpxchg_failure("slab_alloc", s, tid);
-		goto redo;
+	if (IS_ENABLED(CONFIG_PREEMPT)) {
+		if (unlikely(!this_cpu_cmpxchg_double(
+				s->cpu_slab->freelist, s->cpu_slab->tid,
+				object, tid,
+				next_object, next_tid(tid)))) {
+
+			note_cmpxchg_failure("slab_alloc", s, tid);
+			goto redo;
+		}
+	} else {
+		/* HACK - on NO-PREEMPT the cmpxchg is not necessary(?) */
+		__this_cpu_write(s->cpu_slab->tid, next_tid(tid));
+		__this_cpu_write(s->cpu_slab->freelist, next_object);
+		/*
+		 * Q: What happens in case this is called from an IRQ handler?
+		 *
+		 * If we need to disable (local) IRQs then most of the
+		 * saving is lost, e.g. the local_irq_{save,restore}
+		 * variant is too costly.
+		 *
+		 * Saved (alloc+free): 18 cycles - 7.157 ns
+		 *
+		 * Cost of (CPU E5-2695):
+		 *  local_irq_{disable,enable}: 7 cycles(tsc) - 2.861 ns
+		 *  local_irq_{save,restore} : 37 cycles(tsc) - 14.846 ns
+		 */
 	}
 	prefetch_freepointer(s, next_object);
 	stat(s, ALLOC_FASTPATH);
@@ -2726,14 +2745,21 @@ redo:
 	if (likely(page == c->page)) {
 		set_freepointer(s, object, c->freelist);
 
-		if (unlikely(!this_cpu_cmpxchg_double(
-				s->cpu_slab->freelist, s->cpu_slab->tid,
-				c->freelist, tid,
-				object, next_tid(tid)))) {
+		if (IS_ENABLED(CONFIG_PREEMPT)) {
+			if (unlikely(!this_cpu_cmpxchg_double(
+					s->cpu_slab->freelist, s->cpu_slab->tid,
+					c->freelist, tid,
+					object, next_tid(tid)))) {
 
-			note_cmpxchg_failure("slab_free", s, tid);
-			goto redo;
+				note_cmpxchg_failure("slab_free", s, tid);
+				goto redo;
+			}
+		} else {
+			/* HACK - on NO-PREEMPT the cmpxchg is not necessary(?) */
+			__this_cpu_write(s->cpu_slab->tid, next_tid(tid));
+			__this_cpu_write(s->cpu_slab->freelist, object);
 		}
+
 		stat(s, FREE_FASTPATH);
 	} else
 		__slab_free(s, page, x, addr);
--