From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754328AbbBTL1w (ORCPT );
	Fri, 20 Feb 2015 06:27:52 -0500
Received: from casper.infradead.org ([85.118.1.10]:51149 "EHLO casper.infradead.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1754051AbbBTL1u (ORCPT );
	Fri, 20 Feb 2015 06:27:50 -0500
Date: Fri, 20 Feb 2015 12:27:43 +0100
From: Peter Zijlstra 
To: Vincent Guittot 
Cc: mingo@kernel.org, linux-kernel@vger.kernel.org, preeti@linux.vnet.ibm.com,
	Morten.Rasmussen@arm.com, kamalesh@linux.vnet.ibm.com, riel@redhat.com,
	efault@gmx.de, nicolas.pitre@linaro.org, dietmar.eggemann@arm.com,
	linaro-kernel@lists.linaro.org
Subject: Re: [PATCH RESEND v9 10/10] sched: move cfs task on a CPU with higher capacity
Message-ID: <20150220112743.GN5029@twins.programming.kicks-ass.net>
References: <1421316570-23097-1-git-send-email-vincent.guittot@linaro.org>
 <1421316570-23097-11-git-send-email-vincent.guittot@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1421316570-23097-11-git-send-email-vincent.guittot@linaro.org>
User-Agent: Mutt/1.5.21 (2012-12-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jan 15, 2015 at 11:09:30AM +0100, Vincent Guittot wrote:
> As a sidenote, this will note generate more spurious ilb because we already

s/note/not/

> trig an ilb if there is more than 1 busy cpu. If this cpu is the only one that
> has a task, we will trig the ilb once for migrating the task.

> +static inline bool nohz_kick_needed(struct rq *rq)
>  {
>  	unsigned long now = jiffies;
>  	struct sched_domain *sd;
>  	struct sched_group_capacity *sgc;
>  	int nr_busy, cpu = rq->cpu;
> +	bool kick = false;
> 
>  	if (unlikely(rq->idle_balance))
> +		return false;
> 
>  	/*
>  	 * We may be recently in ticked or tickless idle mode. At the first
> @@ -7472,38 +7498,44 @@ static inline int nohz_kick_needed(struct rq *rq)
>  	 * balancing.
>  	 */
>  	if (likely(!atomic_read(&nohz.nr_cpus)))
> +		return false;
> 
>  	if (time_before(now, nohz.next_balance))
> +		return false;
> 
>  	if (rq->nr_running >= 2)
> +		return true;

So this,

>  	rcu_read_lock();
>  	sd = rcu_dereference(per_cpu(sd_busy, cpu));
>  	if (sd) {
>  		sgc = sd->groups->sgc;
>  		nr_busy = atomic_read(&sgc->nr_busy_cpus);
> 
> +		if (nr_busy > 1) {
> +			kick = true;
> +			goto unlock;
> +		}
> +
>  	}
> 
> +	sd = rcu_dereference(rq->sd);
> +	if (sd) {
> +		if ((rq->cfs.h_nr_running >= 1) &&
> +				check_cpu_capacity(rq, sd)) {
> +			kick = true;
> +			goto unlock;
> +		}
> +	}

vs this: how would we ever get here? If h_nr_running > 1, must then not
nr_running > 1 as well?

> 
> +	sd = rcu_dereference(per_cpu(sd_asym, cpu));
>  	if (sd && (cpumask_first_and(nohz.idle_cpus_mask,
>  				  sched_domain_span(sd)) < cpu))
> +		kick = true;

For consistency's sake I would've added a goto unlock here as well.

> +unlock:
>  	rcu_read_unlock();
> +	return kick;
>  }