From mboxrd@z Thu Jan  1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Qais Yousef,
 "Peter Zijlstra (Intel)", "Qais Yousef (Google)", Vincent Guittot
Subject: [PATCH 5.10 081/104] sched/uclamp: Make task_fits_capacity() use util_fits_cpu()
Date: Wed, 15 Mar 2023 13:12:52 +0100
Message-Id: <20230315115735.304750811@linuxfoundation.org>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230315115731.942692602@linuxfoundation.org>
References: <20230315115731.942692602@linuxfoundation.org>
User-Agent: quilt/0.67
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID: <stable.vger.kernel.org>
X-Mailing-List: stable@vger.kernel.org

From: Qais Yousef <qais.yousef@arm.com>

commit b48e16a69792b5dc4a09d6807369d11b2970cc36 upstream.

So that the new uclamp rules in regard to migration margin and capacity
pressure are taken into account correctly.
Fixes: a7008c07a568 ("sched/fair: Make task_fits_capacity() consider uclamp restrictions")
Co-developed-by: Vincent Guittot
Signed-off-by: Qais Yousef
Signed-off-by: Peter Zijlstra (Intel)
Link: https://lore.kernel.org/r/20220804143609.515789-3-qais.yousef@arm.com
(cherry picked from commit b48e16a69792b5dc4a09d6807369d11b2970cc36)
Signed-off-by: Qais Yousef (Google)
Signed-off-by: Greg Kroah-Hartman
---
 kernel/sched/fair.c  | 26 ++++++++++++++++----------
 kernel/sched/sched.h |  9 +++++++++
 2 files changed, 25 insertions(+), 10 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4197,10 +4197,12 @@ static inline int util_fits_cpu(unsigned
 	return fits;
 }
 
-static inline int task_fits_capacity(struct task_struct *p,
-				     unsigned long capacity)
+static inline int task_fits_cpu(struct task_struct *p, int cpu)
 {
-	return fits_capacity(uclamp_task_util(p), capacity);
+	unsigned long uclamp_min = uclamp_eff_value(p, UCLAMP_MIN);
+	unsigned long uclamp_max = uclamp_eff_value(p, UCLAMP_MAX);
+	unsigned long util = task_util_est(p);
+	return util_fits_cpu(util, uclamp_min, uclamp_max, cpu);
 }
 
 static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
@@ -4213,7 +4215,7 @@ static inline void update_misfit_status(
 		return;
 	}
 
-	if (task_fits_capacity(p, capacity_of(cpu_of(rq)))) {
+	if (task_fits_cpu(p, cpu_of(rq))) {
 		rq->misfit_task_load = 0;
 		return;
 	}
@@ -7898,7 +7900,7 @@ static int detach_tasks(struct lb_env *e
 
 		case migrate_misfit:
 			/* This is not a misfit task */
-			if (task_fits_capacity(p, capacity_of(env->src_cpu)))
+			if (task_fits_cpu(p, env->src_cpu))
 				goto next;
 
 			env->imbalance = 0;
@@ -8840,6 +8842,10 @@ static inline void update_sg_wakeup_stat
 
 	memset(sgs, 0, sizeof(*sgs));
 
+	/* Assume that task can't fit any CPU of the group */
+	if (sd->flags & SD_ASYM_CPUCAPACITY)
+		sgs->group_misfit_task_load = 1;
+
	for_each_cpu(i, sched_group_span(group)) {
 		struct rq *rq = cpu_rq(i);
 		unsigned int local;
@@ -8859,12 +8865,12 @@ static inline void update_sg_wakeup_stat
 
 		if (!nr_running && idle_cpu_without(i, p))
 			sgs->idle_cpus++;
-	}
 
-	/* Check if task fits in the group */
-	if (sd->flags & SD_ASYM_CPUCAPACITY &&
-	    !task_fits_capacity(p, group->sgc->max_capacity)) {
-		sgs->group_misfit_task_load = 1;
+		/* Check if task fits in the CPU */
+		if (sd->flags & SD_ASYM_CPUCAPACITY &&
+		    sgs->group_misfit_task_load &&
+		    task_fits_cpu(p, i))
+			sgs->group_misfit_task_load = 0;
 	}
 
 	sgs->group_capacity = group->sgc->capacity;
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2468,6 +2468,15 @@ static inline bool uclamp_is_used(void)
 	return static_branch_likely(&sched_uclamp_used);
 }
 #else /* CONFIG_UCLAMP_TASK */
+static inline unsigned long uclamp_eff_value(struct task_struct *p,
+					     enum uclamp_id clamp_id)
+{
+	if (clamp_id == UCLAMP_MIN)
+		return 0;
+
+	return SCHED_CAPACITY_SCALE;
+}
+
 static inline unsigned long uclamp_rq_util_with(struct rq *rq,
 						unsigned long util,
 						struct task_struct *p)
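
For readers following the backport, the behavioural difference is easiest to
see outside the kernel. The sketch below is a standalone userspace model, not
kernel code: fits_capacity() mirrors the kernel macro, but new_task_fits()
keeps only the uclamp_max/uclamp_min rules of util_fits_cpu() and ignores
thermal pressure and the SCHED_CAPACITY_SCALE special cases, while
old_task_fits() models the clamp-then-compare behaviour of the removed
task_fits_capacity()/uclamp_task_util() pair. The helper names
old_task_fits(), new_task_fits() and clamp_ul() are made up for this
illustration only.

#include <stdio.h>

/* Mirrors the kernel's fits_capacity(): X fits Y if X < ~80% of Y. */
#define fits_capacity(cap, max)	((cap) * 1280 < (max) * 1024)

static unsigned long clamp_ul(unsigned long v, unsigned long lo,
			      unsigned long hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/* Old scheme: collapse util and both uclamp bounds into one value,
 * then apply the fixed migration margin against the CPU capacity. */
static int old_task_fits(unsigned long util, unsigned long uclamp_min,
			 unsigned long uclamp_max, unsigned long capacity)
{
	return fits_capacity(clamp_ul(util, uclamp_min, uclamp_max), capacity);
}

/* New scheme (simplified): keep the three values separate. */
static int new_task_fits(unsigned long util, unsigned long uclamp_min,
			 unsigned long uclamp_max, unsigned long capacity)
{
	/* Raw utilization check, with the migration margin. */
	int fits = fits_capacity(util, capacity);

	/* uclamp_max rule: a capped task can never run above its cap,
	 * so the CPU also fits when the cap itself fits -- with no
	 * margin applied to the cap. */
	fits = fits || (uclamp_max <= capacity);

	/* uclamp_min rule: a lightly loaded but boosted task should
	 * only fit where the boost can be honoured. */
	if (util < uclamp_min)
		fits = fits && (uclamp_min <= capacity);

	return fits;
}

int main(void)
{
	/* Busy task (util 800) capped to 512, on a little CPU of
	 * capacity 512: prints "old: 0 new: 1". */
	printf("old: %d new: %d\n",
	       old_task_fits(800, 0, 512, 512),
	       new_task_fits(800, 0, 512, 512));
	return 0;
}

With util = 800 but uclamp_max = 512, the old check clamps to 512 and then
fails the 20% margin against a capacity of 512, so the task is needlessly
marked misfit; the new check accepts the CPU because the cap itself fits.
This is the kind of migration-margin case the commit message refers to.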