From: Mike Galbraith <mgalbraith@suse.de>
To: Matt Fleming <matt@codeblueprint.co.uk>,
Peter Zijlstra <peterz@infradead.org>
Cc: Chris Mason <clm@fb.com>, Ingo Molnar <mingo@kernel.org>,
linux-kernel@vger.kernel.org
Subject: Re: sched: tweak select_idle_sibling to look for idle threads
Date: Fri, 06 May 2016 20:54:38 +0200 [thread overview]
Message-ID: <1462560878.5119.9.camel@suse.de> (raw)
In-Reply-To: <20160505220306.GO2839@codeblueprint.co.uk>
On Thu, 2016-05-05 at 23:03 +0100, Matt Fleming wrote:
> One thing I haven't yet done is twiddled the bits individually to see
> what the best combination is. Have you settled on the right settings
> yet?
Lighter configs below: revert "sched/fair: Fix fairness issue on migration",
then twiddle the knobs one at a time. I added an IDLE_SIBLING knob to ~virgin
master.. only sorta virgin because I always throttle nohz.
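The procedure behind the runs below is just flipping one sched_features bit at
a time and re-running the tbench loop. A minimal dry-run sketch (tbench.sh is
a local wrapper whose contents aren't shown here, so this only prints the
commands instead of executing them):

```shell
#!/bin/sh
# Dry-run sketch of the knob-twiddling procedure. Writing FOO to
# sched_features enables a feature bit, NO_FOO disables it; tbench.sh
# is a site-local wrapper, so we echo each step rather than run it.
cmds=0
run() { echo "+ $*"; cmds=$((cmds+1)); }

for f in NO_IDLE_SIBLING NO_IDLE_CPU NO_AVG_CPU IDLE_SMT; do
    run "echo $f > /sys/kernel/debug/sched_features"
    for i in 1 2 4 8; do
        run "tbench.sh $i 30 2>&1 | grep Throughput"
    done
done
```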
1 x i4790
master
for i in 1 2 4 8; do tbench.sh $i 30 2>&1|grep Throughput; done
Throughput 871.785 MB/sec 1 clients 1 procs max_latency=0.324 ms
Throughput 1514.5 MB/sec 2 clients 2 procs max_latency=0.411 ms
Throughput 2722.43 MB/sec 4 clients 4 procs max_latency=2.400 ms
Throughput 4334.46 MB/sec 8 clients 8 procs max_latency=3.561 ms
echo NO_IDLE_SIBLING > /sys/kernel/debug/sched_features
Throughput 1078.69 MB/sec 1 clients 1 procs max_latency=2.274 ms
Throughput 2130.33 MB/sec 2 clients 2 procs max_latency=1.451 ms
Throughput 3484.18 MB/sec 4 clients 4 procs max_latency=3.430 ms
Throughput 4423.69 MB/sec 8 clients 8 procs max_latency=5.363 ms
masterx
for i in 1 2 4 8; do tbench.sh $i 30 2>&1|grep Throughput; done
Throughput 707.673 MB/sec 1 clients 1 procs max_latency=2.279 ms
Throughput 1503.55 MB/sec 2 clients 2 procs max_latency=0.695 ms
Throughput 2527.73 MB/sec 4 clients 4 procs max_latency=2.321 ms
Throughput 4291.26 MB/sec 8 clients 8 procs max_latency=3.815 ms
echo NO_IDLE_CPU > /sys/kernel/debug/sched_features
for i in 1 2 4 8; do tbench.sh $i 30 2>&1|grep Throughput; done
Throughput 865.936 MB/sec 1 clients 1 procs max_latency=0.411 ms
Throughput 1586.41 MB/sec 2 clients 2 procs max_latency=2.293 ms
Throughput 2638.39 MB/sec 4 clients 4 procs max_latency=2.037 ms
Throughput 4405.43 MB/sec 8 clients 8 procs max_latency=3.581 ms
+ echo NO_AVG_CPU > /sys/kernel/debug/sched_features
+ echo IDLE_SMT > /sys/kernel/debug/sched_features
Throughput 697.126 MB/sec 1 clients 1 procs max_latency=2.220 ms
Throughput 1562.82 MB/sec 2 clients 2 procs max_latency=0.526 ms
Throughput 2620.62 MB/sec 4 clients 4 procs max_latency=6.460 ms
Throughput 4345.13 MB/sec 8 clients 8 procs max_latency=27.921 ms
4 x E7-8890
master
for i in 1 2 4 8 16 32 64 128 256; do tbench.sh $i 30 2>&1| grep Throughput; done
Throughput 615.663 MB/sec 1 clients 1 procs max_latency=0.087 ms
Throughput 1171.53 MB/sec 2 clients 2 procs max_latency=0.087 ms
Throughput 2251.22 MB/sec 4 clients 4 procs max_latency=0.078 ms
Throughput 4090.76 MB/sec 8 clients 8 procs max_latency=0.801 ms
Throughput 7695.92 MB/sec 16 clients 16 procs max_latency=0.235 ms
Throughput 15152 MB/sec 32 clients 32 procs max_latency=0.693 ms
Throughput 21628.2 MB/sec 64 clients 64 procs max_latency=4.666 ms
Throughput 43185.7 MB/sec 128 clients 128 procs max_latency=7.280 ms
Throughput 72144.5 MB/sec 256 clients 256 procs max_latency=8.194 ms
echo NO_IDLE_SIBLING > /sys/kernel/debug/sched_features
Throughput 954.593 MB/sec 1 clients 1 procs max_latency=0.185 ms
Throughput 1882.65 MB/sec 2 clients 2 procs max_latency=0.278 ms
Throughput 3457.03 MB/sec 4 clients 4 procs max_latency=0.431 ms
Throughput 6279.38 MB/sec 8 clients 8 procs max_latency=0.730 ms
Throughput 11170.4 MB/sec 16 clients 16 procs max_latency=0.500 ms
Throughput 21940.9 MB/sec 32 clients 32 procs max_latency=0.475 ms
Throughput 41738.8 MB/sec 64 clients 64 procs max_latency=3.669 ms
Throughput 67634.6 MB/sec 128 clients 128 procs max_latency=6.676 ms
Throughput 76299.7 MB/sec 256 clients 256 procs max_latency=7.878 ms
masterx
for i in 1 2 4 8 16 32 64 128 256; do tbench.sh $i 30 2>&1| grep Throughput; done
Throughput 587.956 MB/sec 1 clients 1 procs max_latency=0.124 ms
Throughput 1140.16 MB/sec 2 clients 2 procs max_latency=0.476 ms
Throughput 2296.03 MB/sec 4 clients 4 procs max_latency=0.142 ms
Throughput 4116.65 MB/sec 8 clients 8 procs max_latency=0.464 ms
Throughput 7820.27 MB/sec 16 clients 16 procs max_latency=0.238 ms
Throughput 14899.2 MB/sec 32 clients 32 procs max_latency=0.321 ms
Throughput 21909.8 MB/sec 64 clients 64 procs max_latency=0.905 ms
Throughput 35495.2 MB/sec 128 clients 128 procs max_latency=6.158 ms
Throughput 75863.2 MB/sec 256 clients 256 procs max_latency=7.650 ms
echo NO_IDLE_CPU > /sys/kernel/debug/sched_features
Throughput 555.15 MB/sec 1 clients 1 procs max_latency=0.096 ms
Throughput 1195.12 MB/sec 2 clients 2 procs max_latency=0.131 ms
Throughput 2276.97 MB/sec 4 clients 4 procs max_latency=0.105 ms
Throughput 4248.14 MB/sec 8 clients 8 procs max_latency=0.131 ms
Throughput 7860.86 MB/sec 16 clients 16 procs max_latency=0.210 ms
Throughput 15178.6 MB/sec 32 clients 32 procs max_latency=0.229 ms
Throughput 21523.9 MB/sec 64 clients 64 procs max_latency=0.842 ms
Throughput 31082.1 MB/sec 128 clients 128 procs max_latency=7.311 ms
Throughput 75887.9 MB/sec 256 clients 256 procs max_latency=7.764 ms
+ echo NO_AVG_CPU > /sys/kernel/debug/sched_features
Throughput 598.063 MB/sec 1 clients 1 procs max_latency=0.131 ms
Throughput 1140.2 MB/sec 2 clients 2 procs max_latency=0.092 ms
Throughput 2268.68 MB/sec 4 clients 4 procs max_latency=0.170 ms
Throughput 4259.7 MB/sec 8 clients 8 procs max_latency=0.212 ms
Throughput 7904.15 MB/sec 16 clients 16 procs max_latency=0.191 ms
Throughput 14840 MB/sec 32 clients 32 procs max_latency=0.279 ms
Throughput 21701.5 MB/sec 64 clients 64 procs max_latency=0.856 ms
Throughput 38945 MB/sec 128 clients 128 procs max_latency=7.501 ms
Throughput 75669.4 MB/sec 256 clients 256 procs max_latency=14.984 ms
+ echo IDLE_SMT > /sys/kernel/debug/sched_features
Throughput 592.799 MB/sec 1 clients 1 procs max_latency=0.120 ms
Throughput 1208.28 MB/sec 2 clients 2 procs max_latency=0.078 ms
Throughput 2319.22 MB/sec 4 clients 4 procs max_latency=0.141 ms
Throughput 4196.64 MB/sec 8 clients 8 procs max_latency=0.253 ms
Throughput 7816.47 MB/sec 16 clients 16 procs max_latency=0.117 ms
Throughput 14990.8 MB/sec 32 clients 32 procs max_latency=0.189 ms
Throughput 21809.4 MB/sec 64 clients 64 procs max_latency=0.832 ms
Throughput 44813 MB/sec 128 clients 128 procs max_latency=7.930 ms
Throughput 75978.1 MB/sec 256 clients 256 procs max_latency=7.337 ms
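For quick comparison, the NO_IDLE_SIBLING deltas over master can be computed
from the figures quoted above. A small Python sketch (client counts and MB/sec
values taken verbatim from the runs above; selection of rows is mine):

```python
# Percent throughput change of NO_IDLE_SIBLING vs. master, from the
# tbench numbers quoted in this message (MB/sec, keyed by client count).
i4790_master = {1: 871.785, 2: 1514.5, 4: 2722.43, 8: 4334.46}
i4790_no_sib = {1: 1078.69, 2: 2130.33, 4: 3484.18, 8: 4423.69}
e7_master = {1: 615.663, 64: 21628.2, 256: 72144.5}
e7_no_sib = {1: 954.593, 64: 41738.8, 256: 76299.7}

def gain(base, test):
    # percent change of test over base, per client count
    return {n: 100.0 * (test[n] - base[n]) / base[n] for n in base}

i4790_gain = gain(i4790_master, i4790_no_sib)
e7_gain = gain(e7_master, e7_no_sib)
for n, g in sorted(i4790_gain.items()):
    print(f"i4790   {n:>3} clients: {g:+6.1f}%")
for n, g in sorted(e7_gain.items()):
    print(f"E7-8890 {n:>3} clients: {g:+6.1f}%")
```

The single-client gain is ~24% on the i4790 and ~55% on the 4 x E7-8890,
shrinking to ~6% at 256 clients on the big box.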