Date: Tue, 2 May 2017 10:30:09 +0200
From: Peter Zijlstra
To: Tejun Heo
Cc: Vincent Guittot, Ingo Molnar, linux-kernel, Linus Torvalds, Mike Galbraith, Paul Turner, Chris Mason, kernel-team@fb.com
Subject: Re: [PATCH 1/2] sched/fair: Fix how load gets propagated from cfs_rq to its sched_entity
Message-ID: <20170502083009.GA3377@worktop.programming.kicks-ass.net>
References: <20170424201344.GA14169@wtj.duckdns.org> <20170424201415.GB14169@wtj.duckdns.org> <20170425181219.GA15593@wtj.duckdns.org> <20170426165123.GA17921@linaro.org> <20170501141733.shphf35psasefraj@hirez.programming.kicks-ass.net> <20170501215604.GB19079@htj.duckdns.org> <20170502081905.GA4626@worktop.programming.kicks-ass.net>
In-Reply-To: <20170502081905.GA4626@worktop.programming.kicks-ass.net>

On Tue, May 02, 2017 at 10:19:05AM +0200, Peter Zijlstra wrote:
> Can you have a play with something like the below? I suspect
> 'shares_runnable' might work for you here.
> ---
>  kernel/sched/fair.c | 62 +++++++++++++++++++++++++++++++++++++----------------
>  1 file changed, 44 insertions(+), 18 deletions(-)

So something like so on top I suppose...
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3101,38 +3101,7 @@ static inline void
 update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	struct cfs_rq *gcfs_rq = group_cfs_rq(se);
-	long delta, load = gcfs_rq->avg.load_avg;
-
-	/*
-	 * If the load of group cfs_rq is null, the load of the
-	 * sched_entity will also be null so we can skip the formula
-	 */
-	if (load) {
-		long tg_load;
-
-		/* Get tg's load and ensure tg_load > 0 */
-		tg_load = atomic_long_read(&gcfs_rq->tg->load_avg) + 1;
-
-		/* Ensure tg_load >= load and updated with current load*/
-		tg_load -= gcfs_rq->tg_load_avg_contrib;
-		tg_load += load;
-
-		/*
-		 * We need to compute a correction term in the case that the
-		 * task group is consuming more CPU than a task of equal
-		 * weight. A task with a weight equals to tg->shares will have
-		 * a load less or equal to scale_load_down(tg->shares).
-		 * Similarly, the sched_entities that represent the task group
-		 * at parent level, can't have a load higher than
-		 * scale_load_down(tg->shares). And the Sum of sched_entities'
-		 * load must be <= scale_load_down(tg->shares).
-		 */
-		if (tg_load > scale_load_down(gcfs_rq->tg->shares)) {
-			/* scale gcfs_rq's load into tg's shares*/
-			load *= scale_load_down(gcfs_rq->tg->shares);
-			load /= tg_load;
-		}
-	}
+	long delta, load = calc_cfs_shares(gcfs_rq, gcfs_rq->tg, shares_runnable);
 
 	delta = load - se->avg.load_avg;