From: Michael Wang <wangyun@linux.vnet.ibm.com>
To: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>,
LKML <linux-kernel@vger.kernel.org>,
Ingo Molnar <mingo@kernel.org>, Alex Shi <alex.shi@intel.com>,
Namhyung Kim <namhyung@kernel.org>, Paul Turner <pjt@google.com>,
Andrew Morton <akpm@linux-foundation.org>,
"Nikunj A. Dadhania" <nikunj@linux.vnet.ibm.com>,
Ram Pai <linuxram@us.ibm.com>
Subject: Re: [PATCH] sched: smart wake-affine
Date: Fri, 05 Jul 2013 12:33:08 +0800 [thread overview]
Message-ID: <51D64C84.5080100@linux.vnet.ibm.com> (raw)
In-Reply-To: <1372997318.7315.23.camel@marge.simpson.net>
On 07/05/2013 12:08 PM, Mike Galbraith wrote:
[snip]
>>
>> Wow, I used to think such an issue would be very hard to track with
>> benchmarks. Is this regression stable?
>
> Yeah, seems to be. I was curious as to why you saw an improvement to
> hackbench, since it didn't seem there should be any, so I thought I'd try
> it on my little box on the way to a long weekend. The unexpected happened.
Oh, I think I failed to explain things clearly in the comments...

It's not the patch that brings the 15% benefit to hackbench, but the
wake-affine stuff itself.

In a previous test, I removed the whole stuff and found that hackbench
dropped 15%, which means that with wake-affine enabled we gain a 15%
benefit (and that's actually the reason why we don't kill the stuff).

The idea here is to avoid harming that 15% benefit while regaining the
performance pgbench lost. Thus, applying this patch to mainline won't
improve hackbench performance, but it will improve pgbench performance.
But this regression is really unexpected... I can hardly believe it's
just caused by a cache issue now, since the number is not small (10% at
most?).

Have you tried using more loops and groups? Would that show an even
bigger regression?

BTW, are these the results of 10 groups and 40 sockets == 400 tasks?
Regards,
Michael Wang
>
>>> pahole said...
>>>
>>> marge:/usr/local/src/kernel/linux-3.x.git # tail virgin
>>> long unsigned int timer_slack_ns; /* 1512 8 */
>>> long unsigned int default_timer_slack_ns; /* 1520 8 */
>>> atomic_t ptrace_bp_refcnt; /* 1528 4 */
>>>
>>> /* size: 1536, cachelines: 24, members: 125 */
>>> /* sum members: 1509, holes: 6, sum holes: 23 */
>>> /* bit holes: 1, sum bit holes: 26 bits */
>>> /* padding: 4 */
>>> /* paddings: 1, sum paddings: 4 */
>>> };
>>>
>>> marge:/usr/local/src/kernel/linux-3.x.git # tail michael
>>> long unsigned int default_timer_slack_ns; /* 1552 8 */
>>> atomic_t ptrace_bp_refcnt; /* 1560 4 */
>>>
>>> /* size: 1568, cachelines: 25, members: 128 */
>>> /* sum members: 1533, holes: 8, sum holes: 31 */
>>> /* bit holes: 1, sum bit holes: 26 bits */
>>> /* padding: 4 */
>>> /* paddings: 1, sum paddings: 4 */
>>> /* last cacheline: 32 bytes */
>>> };
>>>
>>> ..but plugging holes, didn't help, moving this/that around neither, nor
>>> did letting pahole go wild to get the line back. It's plus signs I tell
>>> ya, the evil things must die ;-)
>>
>> Hmm... so the new members kicked some tail members onto a new line... or
>> it may turn out totally different once the compiler takes part in it...
>>
>> It's really hard to estimate the influence, especially while
>> task_struct keeps changing...
>
> Yeah, could be memory layout crud that disappears with the next
> pull/build. Wouldn't be the first time.
>
> -Mike
>
Thread overview: 47+ messages
2013-05-28 5:05 [RFC PATCH] sched: smart wake-affine Michael Wang
2013-06-03 2:28 ` Michael Wang
2013-06-03 3:09 ` Mike Galbraith
2013-06-03 3:26 ` Michael Wang
2013-06-03 3:53 ` Mike Galbraith
2013-06-03 4:52 ` Michael Wang
2013-06-03 5:22 ` Mike Galbraith
2013-06-03 5:50 ` Michael Wang
2013-06-03 6:05 ` Mike Galbraith
2013-06-03 6:31 ` Michael Wang
2013-06-13 3:09 ` Michael Wang
2013-07-02 4:43 ` [PATCH] " Michael Wang
2013-07-02 5:38 ` Mike Galbraith
2013-07-02 5:50 ` Michael Wang
2013-07-02 5:54 ` Mike Galbraith
2013-07-02 6:17 ` Michael Wang
2013-07-02 6:29 ` Mike Galbraith
2013-07-02 6:45 ` Michael Wang
2013-07-02 8:52 ` Peter Zijlstra
2013-07-02 9:35 ` Michael Wang
2013-07-02 9:44 ` Michael Wang
2013-07-04 9:13 ` Peter Zijlstra
2013-07-04 9:38 ` Michael Wang
2013-07-04 10:33 ` Mike Galbraith
2013-07-05 2:47 ` Michael Wang
2013-07-05 4:08 ` Mike Galbraith
2013-07-05 4:33 ` Michael Wang [this message]
2013-07-05 5:41 ` Mike Galbraith
2013-07-05 6:16 ` Michael Wang
2013-07-07 6:43 ` Mike Galbraith
2013-07-08 2:49 ` Michael Wang
2013-07-08 3:12 ` Mike Galbraith
2013-07-08 8:21 ` Peter Zijlstra
2013-07-08 8:49 ` Mike Galbraith
2013-07-08 9:08 ` Michael Wang
2013-07-08 8:58 ` Michael Wang
2013-07-08 18:59 ` Davidlohr Bueso
2013-07-09 2:30 ` Michael Wang
2013-07-09 2:36 ` Davidlohr Bueso
2013-07-09 2:52 ` Michael Wang
2013-07-15 5:13 ` Michael Wang
2013-07-15 5:57 ` Davidlohr Bueso
2013-07-15 6:01 ` Michael Wang
2013-07-18 2:15 ` Michael Wang
2013-07-03 6:10 ` [PATCH v2] " Michael Wang
2013-07-03 8:50 ` Peter Zijlstra
2013-07-03 9:11 ` Michael Wang