From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <52689C38.5010108@linux.vnet.ibm.com>
Date: Thu, 24 Oct 2013 09:34:08 +0530
From: Preeti U Murthy
MIME-Version: 1.0
To: Peter Zijlstra
Subject: Re: [PATCH 3/3] sched: Aggressive balance in domains whose groups
 share package resources
References: <20131021114002.13291.31478.stgit@drishya>
 <20131021114502.13291.60794.stgit@drishya>
 <20131022222326.GL2490@laptop.programming.kicks-ass.net>
In-Reply-To: <20131022222326.GL2490@laptop.programming.kicks-ass.net>
Content-Type: text/plain; charset=ISO-8859-1
Cc: Michael Neuling, Mike Galbraith, linuxppc-dev@lists.ozlabs.org,
 linux-kernel@vger.kernel.org, Anton Blanchard, Paul Turner, Ingo Molnar
List-Id: Linux on PowerPC Developers Mail List

Hi Peter,

On 10/23/2013 03:53 AM, Peter Zijlstra wrote:
> On Mon, Oct 21, 2013 at 05:15:02PM +0530, Vaidyanathan Srinivasan wrote:
>>
>>  kernel/sched/fair.c |   18 ++++++++++++++++++
>>  1 file changed, 18 insertions(+)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 828ed97..bbcd96b 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -5165,6 +5165,8 @@ static int load_balance(int this_cpu, struct rq *this_rq,
>>  {
>>  	int ld_moved, cur_ld_moved, active_balance = 0;
>>  	struct sched_group *group;
>> +	struct sched_domain *child;
>> +	int share_pkg_res = 0;
>>  	struct rq *busiest;
>>  	unsigned long flags;
>>  	struct cpumask *cpus = __get_cpu_var(load_balance_mask);
>> @@ -5190,6 +5192,10 @@ static int load_balance(int this_cpu, struct rq *this_rq,
>>
>>  	schedstat_inc(sd, lb_count[idle]);
>>
>> +	child = sd->child;
>> +	if (child && child->flags & SD_SHARE_PKG_RESOURCES)
>> +		share_pkg_res = 1;
>> +
>>  redo:
>>  	if (!should_we_balance(&env)) {
>>  		*continue_balancing = 0;
>> @@ -5202,6 +5208,7 @@ redo:
>>  		goto out_balanced;
>>  	}
>>
>> +redo_grp:
>>  	busiest = find_busiest_queue(&env, group);
>>  	if (!busiest) {
>>  		schedstat_inc(sd, lb_nobusyq[idle]);
>> @@ -5292,6 +5299,11 @@ more_balance:
>>  		if (!cpumask_empty(cpus)) {
>>  			env.loop = 0;
>>  			env.loop_break = sched_nr_migrate_break;
>> +			if (share_pkg_res &&
>> +				cpumask_intersects(cpus,
>> +					to_cpumask(group->cpumask)))
>
> sched_group_cpus()
>
>> +				goto redo_grp;
>> +
>>  			goto redo;
>>  		}
>>  		goto out_balanced;
>> @@ -5318,9 +5330,15 @@ more_balance:
>>  	 */
>>  	if (!cpumask_test_cpu(this_cpu,
>>  			tsk_cpus_allowed(busiest->curr))) {
>> +		cpumask_clear_cpu(cpu_of(busiest), cpus);
>> +		raw_spin_unlock_irqrestore(&busiest->lock,
>> +					    flags);
>>  		env.flags |= LBF_ALL_PINNED;
>> +		if (share_pkg_res &&
>> +			cpumask_intersects(cpus,
>> +				to_cpumask(group->cpumask)))
>> +			goto redo_grp;
>> +
>>  		goto out_one_pinned;
>>  	}
>
> Man this retry logic is getting annoying.. isn't there anything saner we
> can do?

Let me give this a thought and get back.

Regards
Preeti U Murthy