public inbox for linux-kernel@vger.kernel.org
From: Davidlohr Bueso <dbueso@suse.de>
To: mingo@kernel.org
Cc: peterz@infradead.org, dave@stgolabs.net, linux-kernel@vger.kernel.org
Subject: [PATCH -tip 0/2] kernel/smp: Small csd_lock optimizations
Date: Wed,  9 Mar 2016 17:55:34 -0800	[thread overview]
Message-ID: <1457574936-19065-1-git-send-email-dbueso@suse.de> (raw)

From: Davidlohr Bueso <dave@stgolabs.net>

Hi,

Justifications are in each patch. Patch 2 has a slight impact on some
TLB-flush-intensive benchmarks (albeit these use IPI batching nowadays).
Specifically, for the pft benchmark on a 12-core box:

pft faults
                              4.4                         4.4
                          vanilla                         smp
Hmean    faults/cpu-1   801432.1608 (  0.00%)  795719.8859 ( -0.71%)
Hmean    faults/cpu-3   702578.6659 (  0.00%)  752796.6960 (  7.15%)
Hmean    faults/cpu-5   606080.3473 (  0.00%)  595890.0451 ( -1.68%)
Hmean    faults/cpu-7   460369.0724 (  0.00%)  485283.6343 (  5.41%)
Hmean    faults/cpu-12  294445.4701 (  0.00%)  298300.6011 (  1.31%)
Hmean    faults/cpu-18  213156.0860 (  0.00%)  213584.2741 (  0.20%)
Hmean    faults/cpu-24  153104.2995 (  0.00%)  153198.8473 (  0.06%)
Hmean    faults/sec-1   796329.3184 (  0.00%)  614222.4594 (-22.87%)
Hmean    faults/sec-3  1947806.7372 (  0.00%) 2169267.1582 ( 11.37%)
Hmean    faults/sec-5  2611152.0422 (  0.00%) 2544652.6871 ( -2.55%)
Hmean    faults/sec-7  2493705.4668 (  0.00%) 2674847.5270 (  7.26%)
Hmean    faults/sec-12 2583139.7724 (  0.00%) 2614404.6002 (  1.21%)
Hmean    faults/sec-18 2661410.8170 (  0.00%) 2683427.0703 (  0.83%)
Hmean    faults/sec-24 2670463.4814 (  0.00%) 2666221.6332 ( -0.16%)
Stddev   faults/cpu-1    27537.6676 (  0.00%)   25753.4945 (  6.48%)
Stddev   faults/cpu-3    62616.8041 (  0.00%)   44728.0990 ( 28.57%)
Stddev   faults/cpu-5    70976.9184 (  0.00%)   74720.5716 ( -5.27%)
Stddev   faults/cpu-7    47426.5952 (  0.00%)   32758.2705 ( 30.93%)
Stddev   faults/cpu-12    6951.8792 (  0.00%)    9097.0782 (-30.86%)
Stddev   faults/cpu-18    4293.1696 (  0.00%)    5826.9446 (-35.73%)
Stddev   faults/cpu-24    3195.0939 (  0.00%)    3373.7230 ( -5.59%)
Stddev   faults/sec-1    27315.3093 (  0.00%)  148601.7795 (-444.02%)
Stddev   faults/sec-3   271560.5941 (  0.00%)  193681.0177 ( 28.68%)
Stddev   faults/sec-5   429633.7378 (  0.00%)  458426.3306 ( -6.70%)
Stddev   faults/sec-7   338229.0746 (  0.00%)  226146.3450 ( 33.14%)
Stddev   faults/sec-12   57766.4604 (  0.00%)   82734.3638 (-43.22%)
Stddev   faults/sec-18  118572.1909 (  0.00%)  134966.7210 (-13.83%)
Stddev   faults/sec-24   57452.7350 (  0.00%)   57542.7755 ( -0.16%)

                 4.4         4.4
             vanilla         smp
User           11.91       11.85
System        197.11      194.69
Elapsed        44.24       40.26

While the single-thread result is an outlier, overall we don't seem
to do any harm (within noise range). The numbers could go either way,
but the patches at least make sense afaict.

Thanks!

Davidlohr Bueso (2):
  kernel/smp: Explicitly inline csd_lock helpers
  kernel/smp: Make csd_lock_wait() use smp_cond_acquire()

 kernel/smp.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

--
2.1.4

Thread overview: 6+ messages
2016-03-10  1:55 Davidlohr Bueso [this message]
2016-03-10  1:55 ` [PATCH 1/2] kernel/smp: Explicitly inline csd_lock helpers Davidlohr Bueso
2016-03-10 11:05   ` [tip:locking/core] locking/csd_lock: Explicitly inline csd_lock*() helpers tip-bot for Davidlohr Bueso
2016-03-10  1:55 ` [PATCH 2/2] kernel/smp: Make csd_lock_wait() use smp_cond_acquire() Davidlohr Bueso
2016-03-10 11:06   ` [tip:locking/core] locking/csd_lock: Use smp_cond_acquire() in csd_lock_wait() tip-bot for Davidlohr Bueso
2016-03-10  9:17 ` [PATCH -tip 0/2] kernel/smp: Small csd_lock optimizations Peter Zijlstra
