From: Libo Chen <libo.chen@huawei.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>, <tglx@linutronix.de>,
<mingo@elte.hu>, LKML <linux-kernel@vger.kernel.org>,
Greg KH <gregkh@linuxfoundation.org>,
Li Zefan <lizefan@huawei.com>
Subject: Re: balance storm
Date: Tue, 27 May 2014 20:55:20 +0800
Message-ID: <53848B38.5090408@huawei.com>
In-Reply-To: <20140527094802.GN30445@twins.programming.kicks-ass.net>
On 2014/5/27 17:48, Peter Zijlstra wrote:
> So:
>
> 1) what kind of weird ass workload is that? Why are you waking up so
> often to do no work?
It's just a testcase; I agree it doesn't exist in the real world.
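For reference, a minimal version of that kind of testcase (many tasks that wake very often and do almost nothing per wakeup) could look like the sketch below. The real reproducer wasn't posted in this thread, so the task count, period, and names here are illustrative only:

```c
/* Illustrative sketch of the workload under discussion: NTASKS threads
 * each sleeping ~1 ms and then doing almost no work per wakeup.
 * The original report used ~50 such tasks on a 16-cpu machine. */
#include <pthread.h>
#include <stdatomic.h>
#include <time.h>

enum { NTASKS = 4, ITERS = 10 };   /* scaled down for illustration */

static atomic_long wakeups;

static void *worker(void *arg)
{
    struct timespec ts = { 0, 1000000 };   /* 1 ms */
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        nanosleep(&ts, NULL);              /* frequent wakeup... */
        atomic_fetch_add(&wakeups, 1);     /* ...then almost no work */
    }
    return NULL;
}

/* Run all tasks to completion and return the total wakeup count. */
static long run_workload(void)
{
    pthread_t tid[NTASKS];
    for (int i = 0; i < NTASKS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NTASKS; i++)
        pthread_join(tid[i], NULL);
    return atomic_load(&wakeups);
}
```

Each wakeup is a fresh scheduling decision, which is what makes the wakeup path (rather than the work itself) dominate.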
>
> 2) turning on/off share_pkg_resource is a horrid hack whichever way
> around you turn it.
>
> So I suppose this is due to the select_idle_sibling() nonsense again,
> where we assume L3 is a fair compromise between cheap enough and
> effective enough.
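As I understand it, the heuristic being described is roughly the following (a simplified sketch, not the actual kernel code; llc_mask and idle_mask are stand-ins for the kernel's cpumask machinery):

```c
/* Rough sketch of the select_idle_sibling() idea: on wakeup, prefer the
 * target CPU if idle, else scan CPUs sharing the last-level cache with
 * it for an idle one, else stay put and let load balancing sort it out.
 * Bitmask CPUs 0..63; NOT the kernel implementation. */
#include <stdint.h>

static int select_idle_sibling_sketch(int target, uint64_t llc_mask,
                                      uint64_t idle_mask)
{
    /* Target CPU already idle: cache is hottest there, use it. */
    if (idle_mask & (1ULL << target))
        return target;

    /* Otherwise scan the LLC domain for any idle CPU. */
    uint64_t candidates = llc_mask & idle_mask;
    for (int cpu = 0; cpu < 64; cpu++)
        if (candidates & (1ULL << cpu))
            return cpu;

    /* No idle sibling found: keep the task on the target CPU. */
    return target;
}
```

The cost of this scan grows with the number of CPUs under one L3, which is exactly the tension with ever-larger LLC domains mentioned below.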
>
> Of course, Intel keeps growing the cpu count covered by L3 to ridiculous
> sizes, 8 cores isn't anywhere near their top silly, which shifts the
> balance, and there's always going to be pathological cases (like the
> proposed workload) where it's just always going to suck eggs.
>
> Also, when running 50 such things on a 16 cpu machine, you get roughly 3
> per cpu, since their runtime is stupid low, I would expect it to pretty
> much always hit an idle cpu, which in turn should inhibit the migration.
>
> Then again, maybe the timer slack is causing you grief, resulting in all
> 3 being woken at the same time, instead of having them staggered.
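Staggering them by hand is simple enough: give each task an initial phase offset within the common period so the wakeups spread out instead of coinciding. A sketch (names are illustrative, not from the testcase):

```c
/* Sketch of staggered periodic wakeups: instead of every task arming
 * its timer at t0 + k*period (so all fire together within the timer
 * slack), task i starts with a phase offset of i*period/ntasks. */

/* Phase offset in nanoseconds for task task_idx out of ntasks. */
static long stagger_offset_ns(int task_idx, int ntasks, long period_ns)
{
    return (long)task_idx * period_ns / ntasks;
}
```

Each task would then arm an absolute timer at base + offset + k*period (e.g. clock_nanosleep() with CLOCK_MONOTONIC and TIMER_ABSTIME) so the offsets don't drift back together over time.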
>
> In any case, I'm not sure what the 'regression' report is against, as
> there's only a single kernel version mentioned: 3.4, and that's almost a
> dinosaur.
Upstream has the same problem, as I mentioned before.