public inbox for linux-kernel@vger.kernel.org
From: Peter Zijlstra <peterz@infradead.org>
To: Venkatesh Pallipadi <venki@google.com>
Cc: LKML <linux-kernel@vger.kernel.org>,
	Gautham R Shenoy <ego@in.ibm.com>,
	Joel Schopp <jschopp@austin.ibm.com>,
	Suresh B Siddha <suresh.b.siddha@intel.com>
Subject: Re: [PATCH] sched: Call update_group_power only for local_group
Date: Fri, 02 Jul 2010 10:05:44 +0200
Message-ID: <1278057944.32034.15085.camel@twins>
In-Reply-To: <1278025942-6750-1-git-send-email-venki@google.com>

On Thu, 2010-07-01 at 16:12 -0700, Venkatesh Pallipadi wrote:
> commit 871e35b moved the update_group_power() call into
> update_sg_lb_stats(), resulting in it being called for each group,
> even though it only updates the power of the local group. As a result
> we make frequent redundant update_group_power() calls.
> 
> Move it back under "if (local_group)" condition.
> 
> This reduces the number of calls to update_group_power() by a factor
> of 4 on my 4-core, 4-NUMA-node test system.

Hrm,.. so Gautham removed that because for things like the NO_HZ
balancer the initial balance_cpu == this_cpu constraint doesn't hold.

Now, I don't think the local_group constraint holds for that either, so
the below would again break that..

Should we perhaps have a conditional on this_rq->nohz_balance_kick or
so?
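
Something like the sketch below, perhaps; a rough, untested
illustration only, assuming the rq->nohz_balance_kick flag from the
nohz rework is the right one to test and that this_cpu is still in
scope here (this is not necessarily the change that was eventually
merged):

	if (local_group) {
		balance_cpu = group_first_cpu(group);
		/*
		 * Hypothetical: update the local group's power when we
		 * are the designated balance_cpu, but also when this
		 * CPU was kicked to run the NO_HZ idle balancer on
		 * behalf of an idle CPU, where the usual
		 * balance_cpu == this_cpu assumption need not hold.
		 */
		if (balance_cpu == this_cpu ||
		    cpu_rq(this_cpu)->nohz_balance_kick)
			update_group_power(sd, this_cpu);
	}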

> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -2359,8 +2359,11 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
>  	unsigned int balance_cpu = -1, first_idle_cpu = 0;
>  	unsigned long avg_load_per_task = 0;
>  
> -	if (local_group)
> +	if (local_group) {
>  		balance_cpu = group_first_cpu(group);
> +		update_group_power(sd, this_cpu);
> +	}
> +
>  
>  	/* Tally up the load of all CPUs in the group */
>  	max_cpu_load = 0;



Thread overview: 17+ messages
2010-07-01 23:12 [PATCH] sched: Call update_group_power only for local_group Venkatesh Pallipadi
2010-07-02  8:05 ` Peter Zijlstra [this message]
2010-07-02 16:20   ` Venkatesh Pallipadi
2010-07-02 16:40     ` Peter Zijlstra
2010-07-02 16:56       ` Venkatesh Pallipadi
2010-07-02 17:31         ` Peter Zijlstra
2010-07-08 14:12         ` Peter Zijlstra
2010-07-08 17:45           ` Suresh Siddha
2010-07-08 17:49             ` Peter Zijlstra
2010-07-08 17:50               ` Suresh Siddha
2010-07-08 17:55                 ` Peter Zijlstra
2010-07-08 18:16             ` Peter Zijlstra
2010-07-08 21:53               ` Suresh Siddha
2010-07-09 13:17                 ` Peter Zijlstra
2010-07-17 11:12                 ` [tip:sched/core] sched: Update rq->clock for nohz balanced cpus tip-bot for Suresh Siddha
2010-07-12 17:11               ` [PATCH] sched: Call update_group_power only for local_group Venkatesh Pallipadi
2010-07-17 11:12               ` [tip:sched/core] sched: Reduce update_group_power() calls tip-bot for Peter Zijlstra
