From: Huang Ying <ying.huang@intel.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>,
LKML <linux-kernel@vger.kernel.org>, LKP ML <lkp@01.org>
Subject: [LKP] [sched/core] 9edfbfed3f5: +88.2% hackbench.time.involuntary_context_switches
Date: Mon, 09 Feb 2015 13:58:33 +0800
Message-ID: <1423461513.5968.56.camel@intel.com>
FYI, we noticed the following changes on
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
commit 9edfbfed3f544a7830d99b341f0c175995a02950 ("sched/core: Rework rq->clock update skips")
testbox/testcase/testparams: xps2/hackbench/performance-1600%-process-socket
cebde6d681aa45f9          9edfbfed3f544a7830d99b341f
----------------          --------------------------
       %stddev     %change         %stddev
           \          |                \
1839273 ± 6% +88.2% 3462337 ± 4% hackbench.time.involuntary_context_switches
41965851 ± 5% +5.6% 44307403 ± 1% hackbench.time.voluntary_context_switches
388 ± 39% -58.6% 160 ± 10% sched_debug.cfs_rq[1]:/.tg_load_contrib
12957 ± 14% -60.5% 5117 ± 11% sched_debug.cfs_rq[2]:/.tg_load_avg
30505 ± 14% -57.7% 12905 ± 6% sched_debug.cfs_rq[3]:/.tg_load_avg
2790 ± 24% -65.4% 964 ± 32% sched_debug.cfs_rq[3]:/.blocked_load_avg
2915 ± 23% -62.2% 1101 ± 29% sched_debug.cfs_rq[3]:/.tg_load_contrib
1839273 ± 6% +88.2% 3462337 ± 4% time.involuntary_context_switches
1474 ± 28% -61.7% 565 ± 43% sched_debug.cfs_rq[2]:/.tg_load_contrib
11830 ± 15% +63.0% 19285 ± 11% sched_debug.cpu#4.sched_goidle
19319 ± 29% +91.1% 36913 ± 7% sched_debug.cpu#3.sched_goidle
5899 ± 31% -35.6% 3801 ± 11% sched_debug.cfs_rq[4]:/.blocked_load_avg
5999 ± 30% -34.5% 3929 ± 11% sched_debug.cfs_rq[4]:/.tg_load_contrib
37884 ± 13% -33.5% 25207 ± 7% sched_debug.cfs_rq[4]:/.tg_load_avg
229547 ± 5% +47.9% 339519 ± 5% cpuidle.C1-NHM.usage
35712 ± 3% +31.7% 47036 ± 9% cpuidle.C3-NHM.usage
5010 ± 9% -29.0% 3556 ± 20% sched_debug.cfs_rq[6]:/.blocked_load_avg
5139 ± 9% -28.2% 3690 ± 19% sched_debug.cfs_rq[6]:/.tg_load_contrib
49568 ± 6% +24.8% 61867 ± 7% sched_debug.cpu#1.sched_goidle
26369 ± 35% -42.0% 15289 ± 29% cpuidle.C6-NHM.usage
18 ± 16% +36.5% 25 ± 7% sched_debug.cpu#4.nr_running
1.41 ± 12% -19.3% 1.14 ± 13% perf-profile.cpu-cycles.sock_wfree.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb
25 ± 15% +28.7% 32 ± 9% sched_debug.cpu#3.nr_running
1.63 ± 11% -18.0% 1.34 ± 12% perf-profile.cpu-cycles.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb.unix_stream_recvmsg
0.57 ± 8% +9.6% 0.62 ± 5% turbostat.CPU%c1
148 ± 11% -16.7% 123 ± 7% sched_debug.cfs_rq[1]:/.load
109 ± 6% +17.1% 128 ± 6% sched_debug.cpu#6.cpu_load[0]
2.41 ± 8% -13.3% 2.09 ± 11% perf-profile.cpu-cycles.skb_release_head_state.skb_release_all.consume_skb.unix_stream_recvmsg.sock_aio_read
147 ± 12% -16.4% 123 ± 7% sched_debug.cpu#1.load
111 ± 5% +15.4% 129 ± 5% sched_debug.cpu#6.cpu_load[2]
110 ± 5% +14.9% 127 ± 5% sched_debug.cfs_rq[6]:/.runnable_load_avg
112 ± 5% +14.5% 128 ± 4% sched_debug.cpu#6.cpu_load[3]
113 ± 5% +13.2% 128 ± 3% sched_debug.cpu#6.cpu_load[4]
789953 ± 2% -10.8% 704528 ± 4% sched_debug.cpu#3.avg_idle
15471 ± 5% -7.7% 14278 ± 2% sched_debug.cpu#5.curr->pid
2675106 ± 10% +16.2% 3109411 ± 1% sched_debug.cpu#4.nr_switches
2675140 ± 10% +16.2% 3109440 ± 1% sched_debug.cpu#4.sched_count
155201 ± 5% +14.6% 177901 ± 3% softirqs.RCU
8.64 ± 6% -9.6% 7.82 ± 5% perf-profile.cpu-cycles.skb_release_all.consume_skb.unix_stream_recvmsg.sock_aio_read.sock_aio_read
2658351 ± 11% +13.7% 3021564 ± 2% sched_debug.cpu#5.sched_count
2658326 ± 11% +13.7% 3021539 ± 2% sched_debug.cpu#5.nr_switches
71443 ± 5% +9.9% 78486 ± 0% vmstat.system.cs
8209 ± 5% +7.3% 8805 ± 0% vmstat.system.in
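For reference, the %change column above is simply the relative difference between the two sample means; a minimal sketch using the headline metric from the first table row:

```python
# Values from the hackbench.time.involuntary_context_switches row above.
base = 1839273     # mean on cebde6d681aa45f9
patched = 3462337  # mean on 9edfbfed3f544a7830d99b341f

pct_change = (patched - base) / base * 100
print(f"{pct_change:+.1f}%")  # → +88.2%, matching the table
```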
xps2: Nehalem
Memory: 4G
To reproduce:
apt-get install ruby ruby-oj
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Huang, Ying
[-- Attachment #2: reproduce --]
tbench_srv &
tbench 4 127.0.0.1
killall tbench_srv
_______________________________________________
LKP mailing list
LKP@linux.intel.com
[-- Attachment #4: job.yaml --]
Thread overview: 4+ messages
2015-02-09  5:58 Huang Ying [this message]
2015-02-09  8:31 ` [LKP] [sched/core] 9edfbfed3f5: +88.2% hackbench.time.involuntary_context_switches Peter Zijlstra
[not found]      ` <CAC=cRTN3VKmgM4BYUrN=rMfg8stiP-F_oWR+2LSPFR9m1Wmfqg@mail.gmail.com>
2015-02-09  9:27   ` Peter Zijlstra
2015-02-10  7:26     ` Markus Pargmann