From: Gautham R Shenoy <ego@in.ibm.com>
To: Andi Kleen <andi@firstfloor.org>
Cc: Ingo Molnar <mingo@elte.hu>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>,
linux-kernel@vger.kernel.org,
Suresh Siddha <suresh.b.siddha@intel.com>,
Balbir Singh <balbir@in.ibm.com>
Subject: Re: [PATCH v2 1/2] sched: Nominate idle load balancer from a semi-idle package.
Date: Fri, 3 Apr 2009 20:41:43 +0530 [thread overview]
Message-ID: <20090403151143.GA7641@in.ibm.com> (raw)
In-Reply-To: <87hc16kyk5.fsf@basil.nowhere.org>
Hi Andi,
Thanks for the review.
On Fri, Apr 03, 2009 at 09:04:42AM +0200, Andi Kleen wrote:
> Gautham R Shenoy <ego@in.ibm.com> writes:
> >
> > Improve the algorithm to nominate the idle load balancer from a
> > semi-idle core/package, thereby increasing the probability of the
> > cores/packages being in deeper sleep states for longer durations.
>
> The basic patch looks good.
>
> In theory you could also look for a nearby nohz balancer in the end
> to optimize traffic on the interconnect of a larger NUMA system,
> but it's probably not worth it.
The algorithm already does this, since it starts off with its own
sched_group in the power-aware sched_domain and then moves to its
sibling groups. The sibling groups are linked in order of their
proximity.
>
> >
> > The algorithm is activated only when sched_mc/smt_power_savings != 0.
>
> But it seems to me that this check could be dropped and doing it
> unconditionally, because idle balancing doesn't need much memory
> bandwith or cpu power, so always putting it nearby is good.
Well, right now, a new idle load balancer is nominated when the current
idle load balancer picks up a task. At that point, if the user is
concerned about performance rather than energy savings, we wouldn't
want to iterate over the domain hierarchy to find the best idle load
balancer, would we? Doing so could add latency to running the job that
is queued on our runqueue.
Actually, this can be optimized. The current idle load balancer can
nominate first_cpu(nohz.cpu_mask) as the next ilb, and that idle load
balancer can then check, at the end of its sched_tick, whether there is
a more power-efficient idle load balancer.
Let me see if this gives any benefit over the patches that I've posted.
>
> -Andi
>
> --
> ak@linux.intel.com -- Speaking for myself only.
--
Thanks and Regards
gautham