public inbox for linux-kernel@vger.kernel.org
From: Jason Low <jason.low2@hp.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: mingo@kernel.org, linux-kernel@vger.kernel.org,
	daniel.lezcano@linaro.org, alex.shi@linaro.org,
	preeti@linux.vnet.ibm.com, efault@gmx.de,
	vincent.guittot@linaro.org, morten.rasmussen@arm.com,
	aswin@hp.com, chegu_vinod@hp.com
Subject: Re: [PATCH 3/3] sched, fair: Stop searching for tasks in newidle balance if there are runnable tasks
Date: Thu, 24 Apr 2014 09:43:09 -0700	[thread overview]
Message-ID: <1398357789.3509.6.camel@j-VirtualBox> (raw)
In-Reply-To: <20140424071541.GZ26782@laptop.programming.kicks-ass.net>

On Thu, 2014-04-24 at 09:15 +0200, Peter Zijlstra wrote:
> On Wed, Apr 23, 2014 at 06:30:35PM -0700, Jason Low wrote:
> > It was found that when running some workloads (such as AIM7) on large systems
> > with many cores, CPUs do not remain idle for long. Thus, tasks can
> > wake/get enqueued while doing idle balancing.
> > 
> > In this patch, while traversing the domains in idle balance, in addition to
> > checking for pulled_task, we add an extra check for this_rq->nr_running for
> > determining if we should stop searching for tasks to pull. If there are
> > runnable tasks on this rq, then we will stop traversing the domains. This
> > reduces the chance that idle balance delays a task from running.
> > 
> > This patch resulted in approximately a 6% performance improvement when
> > running a Java Server workload on an 8 socket machine.
> > 
> > Signed-off-by: Jason Low <jason.low2@hp.com>
> > ---
> >  kernel/sched/fair.c |    8 ++++++--
> >  1 files changed, 6 insertions(+), 2 deletions(-)
> > 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 3e3ffb8..232518c 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6689,7 +6689,6 @@ static int idle_balance(struct rq *this_rq)
> >  		if (sd->flags & SD_BALANCE_NEWIDLE) {
> >  			t0 = sched_clock_cpu(this_cpu);
> >  
> > -			/* If we've pulled tasks over stop searching: */
> >  			pulled_task = load_balance(this_cpu, this_rq,
> >  						   sd, CPU_NEWLY_IDLE,
> >  						   &continue_balancing);
> > @@ -6704,7 +6703,12 @@ static int idle_balance(struct rq *this_rq)
> >  		interval = msecs_to_jiffies(sd->balance_interval);
> >  		if (time_after(next_balance, sd->last_balance + interval))
> >  			next_balance = sd->last_balance + interval;
> > -		if (pulled_task)
> > +
> > +		/*
> > +		 * Stop searching for tasks to pull if there are
> > +		 * now runnable tasks on this rq.
> > +		 */
> > +		if (pulled_task || this_rq->nr_running > 0)
> >  			break;
> >  	}
> >  	rcu_read_unlock();
> 
> There's also the CONFIG_PREEMPT bit in move_tasks(); does making that
> unconditional also help such a workload?

If the patch below is what you were referring to, I believe this
can help too. It was also something I was testing out before we
went with the patches that compare avg_idle with idle balance
cost. I recall seeing somewhere around a +7% performance improvement
in at least one of the AIM7 workloads. I can do some more testing
with this.

---
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 43232b8..d069054 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5304,7 +5304,6 @@ static int move_tasks(struct lb_env *env)
                pulled++;
                env->imbalance -= load;
 
-#ifdef CONFIG_PREEMPT
                /*
                 * NEWIDLE balancing is a source of latency, so preemptible
                 * kernels will stop after the first task is pulled to minimize
@@ -5312,7 +5311,6 @@ static int move_tasks(struct lb_env *env)
                 */
                if (env->idle == CPU_NEWLY_IDLE)
                        break;
-#endif
 
                /*
                 * We only want to steal up to the prescribed amount of
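For illustration only: the exit condition both patches converge on (stop
newidle scanning once a task has been pulled, or once runnable tasks have
appeared on this rq) can be modeled as a tiny standalone function. This is
not kernel code; stop_newidle_search() is a hypothetical name invented here
for the sketch.

```c
#include <stdbool.h>

/*
 * Toy model of the loop-exit test from the hunks above: newidle
 * balancing should stop traversing sched domains once a task has
 * been pulled, or once runnable tasks have shown up on this
 * runqueue anyway. Plain C, no kernel dependencies.
 */
bool stop_newidle_search(int pulled_task, unsigned int nr_running)
{
	return pulled_task || nr_running > 0;
}
```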




Thread overview: 45+ messages
2014-04-24  1:30 [PATCH 0/3] sched: Idle balance patches Jason Low
2014-04-24  1:30 ` [PATCH 1/3] sched, balancing: Update rq->max_idle_balance_cost whenever newidle balance is attempted Jason Low
2014-04-24 10:14   ` Preeti U Murthy
2014-04-24 12:04     ` Peter Zijlstra
2014-04-24 12:44       ` Peter Zijlstra
2014-04-24 16:53         ` Jason Low
2014-04-24 17:14           ` Peter Zijlstra
2014-04-24 17:29             ` Peter Zijlstra
2014-04-24 22:18             ` Jason Low
2014-04-25  5:12               ` Preeti U Murthy
2014-04-25  7:13                 ` Jason Low
2014-04-25  7:58                   ` Mike Galbraith
2014-04-25 17:03                     ` Jason Low
2014-04-25  5:08             ` Preeti U Murthy
2014-04-25  9:43               ` Peter Zijlstra
2014-04-25 19:54                 ` Jason Low
2014-04-26 14:50                   ` Peter Zijlstra
2014-04-28 16:42                     ` Jason Low
2014-04-27  8:31                   ` Preeti Murthy
2014-04-28  9:24                     ` Peter Zijlstra
2014-04-29  3:10                       ` Preeti U Murthy
2014-04-28 18:04                     ` Jason Low
2014-04-29  3:52                       ` Preeti U Murthy
2014-04-24  1:30 ` [PATCH 2/3] sched: Initialize newidle balance stats in sd_numa_init() Jason Low
2014-04-24 12:18   ` Peter Zijlstra
2014-04-25  5:57   ` Preeti U Murthy
2014-05-08 10:42   ` [tip:sched/core] sched/numa: " tip-bot for Jason Low
2014-04-24  1:30 ` [PATCH 3/3] sched, fair: Stop searching for tasks in newidle balance if there are runnable tasks Jason Low
2014-04-24  2:51   ` Mike Galbraith
2014-04-24  8:28     ` Mike Galbraith
2014-04-24 16:37     ` Jason Low
2014-04-24 19:07       ` Mike Galbraith
2014-04-24  7:15   ` Peter Zijlstra
2014-04-24 16:43     ` Jason Low [this message]
2014-04-24 16:52       ` Peter Zijlstra
2014-04-25  1:24         ` Jason Low
2014-04-25  2:45         ` Mike Galbraith
2014-04-25  3:33           ` Jason Low
2014-04-25  5:46             ` Mike Galbraith
2014-04-24 16:54       ` Peter Zijlstra
2014-04-24 10:30   ` Morten Rasmussen
2014-04-24 11:32     ` Peter Zijlstra
2014-04-24 14:08       ` Morten Rasmussen
2014-04-24 14:59         ` Peter Zijlstra
2014-05-08 10:44   ` [tip:sched/core] sched/fair: " tip-bot for Jason Low
