From: Jason Low <jason.low2@hp.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>,
LKML <linux-kernel@vger.kernel.org>,
Mike Galbraith <efault@gmx.de>,
Thomas Gleixner <tglx@linutronix.de>,
Paul Turner <pjt@google.com>, Alex Shi <alex.shi@intel.com>,
Preeti U Murthy <preeti@linux.vnet.ibm.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
Morten Rasmussen <morten.rasmussen@arm.com>,
Namhyung Kim <namhyung@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Kees Cook <keescook@chromium.org>, Mel Gorman <mgorman@suse.de>,
Rik van Riel <riel@redhat.com>,
aswin@hp.com, scott.norton@hp.com, chegu_vinod@hp.com
Subject: Re: [RFC] sched: Limit idle_balance() when it is being used too frequently
Date: Wed, 17 Jul 2013 08:59:01 -0700
Message-ID: <1374076741.7412.35.camel@j-VirtualBox>
In-Reply-To: <20130717093913.GP23818@dyad.programming.kicks-ass.net>
Hi Peter,
On Wed, 2013-07-17 at 11:39 +0200, Peter Zijlstra wrote:
> On Wed, Jul 17, 2013 at 01:11:41AM -0700, Jason Low wrote:
> > For the more complex model, are you suggesting that each completion time
> > is the time it takes to complete one iteration of the for_each_domain()
> > loop?
>
> Per sd, yes? So higher domains (or lower, depending on how you model the
> thing in your head) have bigger CPU spans, and thus take longer to complete.
> Imagine the top domain of a 4096-CPU system; it would go look at all CPUs
> to see if it could find a task.
>
> > Based on some of the data I collected, a single iteration of the
> > for_each_domain() loop is almost always significantly shorter than the
> > approximate CPU idle time, even in workloads where idle_balance is
> > lowering performance. The bigger issue is that it takes so many of these
> > attempts before idle_balance actually "works" and pulls a task.
>
> I'm confused, so:
>
>   schedule()
>     if (!rq->nr_running)
>       idle_balance()
>         for_each_domain(sd)
>           load_balance(sd)
>
> is the entire thing; there's no other loop in there.
So if we have the following:

	for_each_domain(this_cpu, sd) {
		before = sched_clock_cpu(this_cpu);
		load_balance(this_cpu, this_rq, sd, CPU_NEWLY_IDLE, &balance);
		after  = sched_clock_cpu(this_cpu);
		idle_balance_completion_time = after - before;
	}
At this point, "idle_balance_completion_time" is usually quite small, and
typically much smaller than the avg CPU idle time. However, the vast
majority of the time, load_balance() returns 0.
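
(For reference, the "avg CPU idle time" above is rq->avg_idle, which the
kernel already tracks at wakeup, roughly like this:)

	/* In ttwu_do_wakeup(): estimate how long this CPU tends to stay
	 * idle, capped at 2 * sysctl_sched_migration_cost. */
	if (rq->idle_stamp) {
		u64 delta = rq->clock - rq->idle_stamp;
		u64 max = 2 * sysctl_sched_migration_cost;

		if (delta > max)
			rq->avg_idle = max;
		else
			update_avg(&rq->avg_idle, delta); /* avg += (delta - avg) / 8 */
		rq->idle_stamp = 0;
	}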
> > I was initially thinking of each "completion time" of an idle balance
> > as the sum of the times of all iterations until a task is successfully
> > pulled within each domain.
>
> So you're saying that normally idle_balance() won't find a task to pull? And
> we need to go newidle many times before we do get something?
Yes, a while ago I collected some data on the rate at which idle_balance()
fails to pull tasks, and it was a very high number.
> Wouldn't this mean that there simply weren't enough tasks to keep all cpus busy?
If I remember correctly, in a lot of those load_balance attempts while the
machine was under a high Java load, there was no "imbalance" between the
groups in each sched_domain.
> If there were tasks we could've pulled, we might need to look at why they
> weren't pulled and maybe fix that. Now it could be that it thinks this cpu,
> even with the (little) idle time it has, is sufficiently loaded and we'll
> get a 'local' wakeup soon enough. That's perfectly fine.
>
> What we should avoid is spending more time looking for tasks than we have
> idle time, since that reduces the total time we can spend doing useful work.
> So that is, I think, the critical cut-off point.
Do you think it's worth trying to account each newidle balance "cost" as the
total time spent in load_balance() attempts until one successfully moves a
task, and then skip balancing within the domain if a CPU's avg idle time is
less than that avg newidle balance cost?
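
Something like this rough sketch (the fields newidle_cost_acc and
avg_newidle_cost are hypothetical additions to struct rq, not existing
fields; avg_idle already exists; for simplicity the check gates the whole
newidle pass rather than each individual domain):

	static int idle_balance(int this_cpu, struct rq *this_rq)
	{
		struct sched_domain *sd;
		int pulled_task = 0;
		int balance = 1;

		/* Skip the whole newidle pass if we don't expect to stay
		 * idle longer than a successful balance historically costs. */
		if (this_rq->avg_idle < this_rq->avg_newidle_cost)
			return 0;

		rcu_read_lock();
		for_each_domain(this_cpu, sd) {
			u64 t0 = sched_clock_cpu(this_cpu);

			pulled_task = load_balance(this_cpu, this_rq, sd,
						   CPU_NEWLY_IDLE, &balance);

			/* Accumulate cost across attempts until a pull
			 * finally succeeds. */
			this_rq->newidle_cost_acc += sched_clock_cpu(this_cpu) - t0;

			if (pulled_task) {
				/* Fold the accumulated cost into a running
				 * average: avg += (acc - avg) / 8, same decay
				 * as avg_idle uses. */
				this_rq->avg_newidle_cost +=
					((s64)(this_rq->newidle_cost_acc -
					       this_rq->avg_newidle_cost)) >> 3;
				this_rq->newidle_cost_acc = 0;
				break;
			}
		}
		rcu_read_unlock();

		return pulled_task;
	}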
Thanks,
Jason