public inbox for linux-kernel@vger.kernel.org
From: Morten Rasmussen <morten.rasmussen@arm.com>
To: Alex Shi <alex.shi@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	"mingo@redhat.com" <mingo@redhat.com>,
	"vincent.guittot@linaro.org" <vincent.guittot@linaro.org>,
	"daniel.lezcano@linaro.org" <daniel.lezcano@linaro.org>,
	"fweisbec@gmail.com" <fweisbec@gmail.com>,
	"linux@arm.linux.org.uk" <linux@arm.linux.org.uk>,
	"tony.luck@intel.com" <tony.luck@intel.com>,
	"fenghua.yu@intel.com" <fenghua.yu@intel.com>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"arjan@linux.intel.com" <arjan@linux.intel.com>,
	"pjt@google.com" <pjt@google.com>,
	"fengguang.wu@intel.com" <fengguang.wu@intel.com>,
	"james.hogan@imgtec.com" <james.hogan@imgtec.com>,
	"jason.low2@hp.com" <jason.low2@hp.com>,
	"gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>,
	"hanjun.guo@linaro.org" <hanjun.guo@linaro.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 4/4] sched: bias to target cpu load to reduce task moving
Date: Fri, 20 Dec 2013 11:19:26 +0000	[thread overview]
Message-ID: <20131220111926.GA11605@e103034-lin> (raw)
In-Reply-To: <52B2F5D0.2050707@linaro.org>

On Thu, Dec 19, 2013 at 01:34:08PM +0000, Alex Shi wrote:
> On 12/17/2013 11:38 PM, Peter Zijlstra wrote:
> > On Tue, Dec 17, 2013 at 02:10:12PM +0000, Morten Rasmussen wrote:
> >>> @@ -4135,7 +4141,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
> >>>  			if (local_group)
> >>>  				load = source_load(i);
> >>>  			else
> >>> -				load = target_load(i);
> >>> +				load = target_load(i, sd->imbalance_pct);
> >>
> >> Don't you apply imbalance_pct twice here? Later on in
> >> find_idlest_group() you have:
> >>
> >> 	if (!idlest || 100*this_load < imbalance*min_load)
> >> 		return NULL;
> >>
> >> where min_load comes from target_load().
> > 
> > Yes! exactly! this doesn't make any sense.
> 
> Thanks a lot for review and comments!
> 
> I changed the patch into the following shape and pushed it out for Fengguang's
> testing system to monitor. Any testing is appreciated!
> 
> BTW, it seems that many scheduler changes come from experience with various
> scenarios/benchmarks, but I would still like to get any theoretical comments/suggestions.
> 
> -- 
> Thanks
>     Alex
> 
> ===
> 
> From 5cd67d975001edafe2ee820e0be5d86881a23bd6 Mon Sep 17 00:00:00 2001
> From: Alex Shi <alex.shi@linaro.org>
> Date: Sat, 23 Nov 2013 23:18:09 +0800
> Subject: [PATCH 4/4] sched: bias to target cpu load to reduce task moving
> 
> Task migration happens when the target cpu load is only a bit lower than the
> source cpu load. To make this happen less often, inflate the target cpu load
> by sd->imbalance_pct/100 in wake_affine.
> 
> In find_idlest/busiest_group, apply the bias to the local cpu only instead of
> to the whole local group as before.
> 
> Benchmark results on my Pandaboard ES:
> 
> 	latest kernel 527d1511310a89		+ whole patchset
> hackbench -T -g 10 -f 40
> 	23.25"					21.99"
> 	23.16"					21.20"
> 	24.24"					21.89"
> hackbench -p -g 10 -f 40
> 	26.52"					21.46"
> 	23.89"					22.96"
> 	25.65"					22.73"
> hackbench -P -g 10 -f 40
> 	20.14"					19.72"
> 	19.96"					19.10"
> 	21.76"					20.03"
> 
> Signed-off-by: Alex Shi <alex.shi@linaro.org>
> ---
>  kernel/sched/fair.c | 35 ++++++++++++++++-------------------
>  1 file changed, 16 insertions(+), 19 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index bccdd89..3623ba4 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -978,7 +978,7 @@ static inline unsigned long group_weight(struct task_struct *p, int nid)
>  
>  static unsigned long weighted_cpuload(const int cpu);
>  static unsigned long source_load(int cpu);
> -static unsigned long target_load(int cpu);
> +static unsigned long target_load(int cpu, int imbalance_pct);
>  static unsigned long power_of(int cpu);
>  static long effective_load(struct task_group *tg, int cpu, long wl, long wg);
>  
> @@ -3809,11 +3809,17 @@ static unsigned long source_load(int cpu)
>   * Return a high guess at the load of a migration-target cpu weighted
>   * according to the scheduling class and "nice" value.
>   */
> -static unsigned long target_load(int cpu)
> +static unsigned long target_load(int cpu, int imbalance_pct)
>  {
>  	struct rq *rq = cpu_rq(cpu);
>  	unsigned long total = weighted_cpuload(cpu);
>  
> +	/*
> +	 * Without cpu_load decay, cpu_load is the same as total most of the
> +	 * time, so make the target a bit heavier to reduce task migration.
> +	 */
> +	total = total * imbalance_pct / 100;
> +
>  	if (!sched_feat(LB_BIAS))
>  		return total;
>  
> @@ -4033,7 +4039,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>  	this_cpu  = smp_processor_id();
>  	prev_cpu  = task_cpu(p);
>  	load	  = source_load(prev_cpu);
> -	this_load = target_load(this_cpu);
> +	this_load = target_load(this_cpu, 100);
>  
>  	/*
>  	 * If sync wakeup then subtract the (maximum possible)
> @@ -4089,7 +4095,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>  
>  	if (balanced ||
>  	    (this_load <= load &&
> -	     this_load + target_load(prev_cpu) <= tl_per_task)) {
> +	     this_load + target_load(prev_cpu, 100) <= tl_per_task)) {
>  		/*
>  		 * This domain has SD_WAKE_AFFINE and
>  		 * p is cache cold in this domain, and
> @@ -4112,7 +4118,6 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
>  {
>  	struct sched_group *idlest = NULL, *group = sd->groups;
>  	unsigned long min_load = ULONG_MAX, this_load = 0;
> -	int imbalance = 100 + (sd->imbalance_pct-100)/2;
>  
>  	do {
>  		unsigned long load, avg_load;
> @@ -4132,10 +4137,10 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
>  
>  		for_each_cpu(i, sched_group_cpus(group)) {
>  			/* Bias balancing toward cpus of our domain */
> -			if (local_group)
> +			if (i == this_cpu)

What is the motivation for changing the local_group load calculation?
Now all cpus in the local group except this_cpu will contribute more to
this_load, since their contribution is determined using target_load()
instead of source_load().

If I'm not mistaken, that will lead to more frequent load balancing,
because the local_group bias has been reduced. That is the opposite of
your intention, judging by your comment in target_load().
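
To make the effect concrete (a rough example that ignores the LB_BIAS
min/max and assumes imbalance_pct = 125, four cpus per group, and
weighted_cpuload = 1024 on every cpu): the local group now averages
(1024 + 3*1280)/4 = 1216 while a remote group averages 1280, so with the
new 'this_load < min_load' check the margin in favour of local placement
is only ~5%, and it shrinks further as the group grows. With the old code
the same scenario gave this_load = min_load = 1024, and the imbalance
factor in the final check provided a fixed ~12% margin regardless of
group size.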

>  				load = source_load(i);
>  			else
> -				load = target_load(i);
> +				load = target_load(i, sd->imbalance_pct);

You scale by sd->imbalance_pct instead of the 100+(sd->imbalance_pct-100)/2
factor that you removed above. sd->imbalance_pct may have been arbitrarily
chosen in the past, but changing the factor may affect behavior.
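
With the default imbalance_pct of 125 at the non-SMT levels, the removed
factor evaluated to 100 + (125 - 100)/2 = 112, so the per-cpu scaling you
apply now is roughly twice as aggressive as the comparison-level margin
it replaces.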

>  
>  			avg_load += load;
>  		}
> @@ -4151,7 +4156,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
>  		}
>  	} while (group = group->next, group != sd->groups);
>  
> -	if (!idlest || 100*this_load < imbalance*min_load)
> +	if (!idlest || this_load < min_load)
>  		return NULL;
>  	return idlest;
>  }
> @@ -5476,9 +5481,9 @@ static inline void update_sg_lb_stats(struct lb_env *env,
>  
>  		nr_running = rq->nr_running;
>  
> -		/* Bias balancing toward cpus of our domain */
> -		if (local_group)
> -			load = target_load(i);
> +		/* Bias balancing toward dst cpu */
> +		if (env->dst_cpu == i)
> +			load = target_load(i, env->sd->imbalance_pct);

Here you do the same group load bias change as above.

>  		else
>  			load = source_load(i);
>  
> @@ -5918,14 +5923,6 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
>  		if ((local->idle_cpus < busiest->idle_cpus) &&
>  		    busiest->sum_nr_running <= busiest->group_weight)
>  			goto out_balanced;
> -	} else {
> -		/*
> -		 * In the CPU_NEWLY_IDLE, CPU_NOT_IDLE cases, use
> -		 * imbalance_pct to be conservative.
> -		 */
> -		if (100 * busiest->avg_load <=
> -				env->sd->imbalance_pct * local->avg_load)
> -			goto out_balanced;
>  	}
>  
>  force_balance:

As said in my previous replies to this series, I think this problem should
be solved by fixing the cause of the problem, that is the cpu_load
calculation, instead of biasing cpu_load wherever it is used to hide
the problem.

Doing a bit of git archaeology reveals that the cpu_load code goes back
to 7897986bad8f6cd50d6149345aca7f6480f49464 and that in the original
patch x86 was using *_idx > 0 for all sched_domain levels except SMT. In
my opinion that made logical sense. If we are about to change to *_idx=0,
we are removing the main idea behind that code, and there needs to be a
new one. Otherwise, cpu_load doesn't make sense.
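
To illustrate where those *_idx levels came in, the pre-series helpers
looked roughly like this (a simplified sketch, not the exact baseline
code; *_idx selects an entry of the decayed rq->cpu_load[] history):

static unsigned long source_load(int cpu, int type)
{
	struct rq *rq = cpu_rq(cpu);
	unsigned long total = weighted_cpuload(cpu);

	/* type == 0 means no history requested; LB_BIAS off disables biasing */
	if (type == 0 || !sched_feat(LB_BIAS))
		return total;

	/* low guess for a migration source: smaller of history and current load */
	return min(rq->cpu_load[type-1], total);
}

static unsigned long target_load(int cpu, int type)
{
	struct rq *rq = cpu_rq(cpu);
	unsigned long total = weighted_cpuload(cpu);

	if (type == 0 || !sched_feat(LB_BIAS))
		return total;

	/* high guess for a migration target: larger of history and current load */
	return max(rq->cpu_load[type-1], total);
}

Callers passed the per-level knobs (sd->busy_idx, sd->idle_idx, etc.) as
'type', which is exactly the machinery this series removes.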

Morten

Thread overview: 34+ messages
2013-12-03  9:05 [PATCH 0/4] sched: remove cpu_load decay Alex Shi
2013-12-03  9:05 ` [PATCH 1/4] sched: shortcut to remove load_idx Alex Shi
2013-12-03  9:05 ` [PATCH 2/4] sched: remove rq->cpu_load[load_idx] array Alex Shi
2013-12-03  9:05 ` [PATCH 3/4] sched: clean up cpu_load update Alex Shi
2013-12-03  9:05 ` [PATCH 4/4] sched: bias to target cpu load to reduce task moving Alex Shi
2013-12-04  9:06   ` Yuanhan Liu
2013-12-04 11:25     ` Alex Shi
2013-12-17 14:10   ` Morten Rasmussen
2013-12-17 15:38     ` Peter Zijlstra
2013-12-19 13:34       ` Alex Shi
2013-12-20 11:19         ` Morten Rasmussen [this message]
2013-12-20 14:45           ` Alex Shi
2013-12-25 14:58           ` Alex Shi
2014-01-02 16:04             ` Morten Rasmussen
2014-01-06 13:35               ` Alex Shi
2014-01-07 12:55                 ` Morten Rasmussen
2014-01-07 12:59                   ` Peter Zijlstra
2014-01-07 13:15                     ` Peter Zijlstra
2014-01-07 13:32                       ` Vincent Guittot
2014-01-07 13:40                         ` Peter Zijlstra
2014-01-07 15:16                       ` Morten Rasmussen
2014-01-07 20:37                         ` Peter Zijlstra
2014-01-08 14:15                     ` Alex Shi
2013-12-03 10:26 ` [PATCH 0/4] sched: remove cpu_load decay Peter Zijlstra
2013-12-10  1:04   ` Alex Shi
2013-12-10  1:06     ` Paul Turner
2013-12-13 19:50     ` bsegall
2013-12-14 12:53       ` Alex Shi
2013-12-13 20:03 ` Peter Zijlstra
2013-12-14 13:27   ` Alex Shi
2013-12-17 14:04     ` Morten Rasmussen
2013-12-17 15:37       ` Peter Zijlstra
2013-12-17 18:12         ` Morten Rasmussen
2013-12-20 14:43           ` Alex Shi
