public inbox for linux-kernel@vger.kernel.org
From: Peter Zijlstra <peterz@infradead.org>
To: Mike Galbraith <efault@gmx.de>
Cc: Vincent Guittot <vincent.guittot@linaro.org>,
	linux-kernel@vger.kernel.org, linaro-dev@lists.linaro.org,
	mingo@redhat.com
Subject: Re: [RFC] sched: nohz_idle_balance
Date: Thu, 13 Sep 2012 10:19:41 +0200	[thread overview]
Message-ID: <1347524381.15764.100.camel@twins> (raw)
In-Reply-To: <1347518991.6821.45.camel@marge.simpson.net>

On Thu, 2012-09-13 at 08:49 +0200, Mike Galbraith wrote:
> On Thu, 2012-09-13 at 06:11 +0200, Vincent Guittot wrote: 
> > On a tickless system, one CPU runs load balancing on behalf of all
> > idle CPUs. Currently the cpu_load of this one CPU is updated before
> > starting the load balance of each of the other idle CPUs. We should
> > instead update the cpu_load of the balance_cpu.
> > 
> > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > ---
> >  kernel/sched/fair.c |   11 ++++++-----
> >  1 file changed, 6 insertions(+), 5 deletions(-)
> > 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 1ca4fe4..9ae3a5b 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -4794,14 +4794,15 @@ static void nohz_idle_balance(int this_cpu, enum cpu_idle_type idle)
> >  		if (need_resched())
> >  			break;
> >  
> > -		raw_spin_lock_irq(&this_rq->lock);
> > -		update_rq_clock(this_rq);
> > -		update_idle_cpu_load(this_rq);
> > -		raw_spin_unlock_irq(&this_rq->lock);
> > +		rq = cpu_rq(balance_cpu);
> > +
> > +		raw_spin_lock_irq(&rq->lock);
> > +		update_rq_clock(rq);
> > +		update_idle_cpu_load(rq);
> > +		raw_spin_unlock_irq(&rq->lock);
> >  
> >  		rebalance_domains(balance_cpu, CPU_IDLE);
> >  
> > -		rq = cpu_rq(balance_cpu);
> >  		if (time_after(this_rq->next_balance, rq->next_balance))
> >  			this_rq->next_balance = rq->next_balance;
> >  	}
> 
> Ew, banging locks and updating clocks to what good end?

Well, updating the load statistics on the cpu you're going to balance
seems like a good end to me.. ;-) No point updating the local statistics
N times and leaving the ones you're going to balance stale for god knows
how long.


Thread overview: 10+ messages
2012-09-13  4:11 [RFC] sched: nohz_idle_balance Vincent Guittot
2012-09-13  6:49 ` Mike Galbraith
2012-09-13  8:19   ` Peter Zijlstra [this message]
2012-09-13 10:29     ` Mike Galbraith
2012-09-13 13:41     ` Rakib Mullick
     [not found]   ` <CAKfTPtBQ7Y1xGOe9NZw8AhNbOzGgkVMgYyXjVa1d308kdG6bfQ@mail.gmail.com>
     [not found]     ` <1347521382.6821.61.camel@marge.simpson.net>
     [not found]       ` <CAKfTPtCvg1qxUv02-dO9qD2HiFzS_bA2Gr0mrN=8LEA3eXA7Bg@mail.gmail.com>
     [not found]         ` <1347522994.6821.67.camel@marge.simpson.net>
2012-09-13  8:37           ` Vincent Guittot
2012-09-13  8:45             ` Peter Zijlstra
2012-09-13  8:53               ` Peter Zijlstra
2012-09-13  8:18 ` Peter Zijlstra
2012-09-14  6:12 ` [tip:sched/core] sched: Fix nohz_idle_balance() tip-bot for Vincent Guittot
