From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Qais Yousef,
    "Peter Zijlstra (Intel)", "Qais Yousef (Google)", Vincent Guittot
Subject: [PATCH 5.10 36/68] sched/uclamp: Make task_fits_capacity() use util_fits_cpu()
Date: Mon, 24 Apr 2023 15:18:07 +0200
Message-Id: <20230424131129.054252367@linuxfoundation.org>
In-Reply-To: <20230424131127.653885914@linuxfoundation.org>
References: <20230424131127.653885914@linuxfoundation.org>
X-Mailing-List: stable@vger.kernel.org

From: Qais Yousef

commit b48e16a69792b5dc4a09d6807369d11b2970cc36 upstream.

So that the new uclamp rules in regard to migration margin and capacity
pressure are taken into account correctly.
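[ Editorial illustration, not part of the patch: to see why switching from
  task_fits_capacity() to task_fits_cpu() matters, below is a minimal,
  self-contained userspace sketch of the before/after semantics. The helper
  names old_task_fits()/new_util_fits_cpu() and the numbers in main() are
  made up for this example; the margin mirrors the kernel's fits_capacity()
  macro, but the real util_fits_cpu() in kernel/sched/fair.c also handles
  thermal pressure and capacity inversion, which are omitted here. ]

#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024UL

/* Kernel's fits_capacity(): require ~20% headroom (margin 1280/1024). */
static int fits_capacity(unsigned long cap, unsigned long max)
{
	return cap * 1280 < max * 1024;
}

/*
 * Old behaviour: uclamp was folded into the utilization value
 * (uclamp_task_util()), and the fixed migration margin was then
 * applied to the clamped result.
 */
static int old_task_fits(unsigned long util, unsigned long uclamp_min,
			 unsigned long uclamp_max, unsigned long capacity)
{
	unsigned long clamped = util;

	if (clamped < uclamp_min)
		clamped = uclamp_min;
	if (clamped > uclamp_max)
		clamped = uclamp_max;

	return fits_capacity(clamped, capacity);
}

/*
 * New behaviour (simplified): uclamp_min and uclamp_max are honoured as
 * separate hints, compared against capacity without the migration margin.
 */
static int new_util_fits_cpu(unsigned long util, unsigned long uclamp_min,
			     unsigned long uclamp_max, unsigned long capacity)
{
	int fits = fits_capacity(util, capacity);

	/* A task capped at or below this CPU's capacity always fits. */
	if (uclamp_max <= capacity)
		fits = 1;

	/* A minimum performance request above the capacity never fits. */
	if (uclamp_min > capacity)
		fits = 0;

	return fits;
}

int main(void)
{
	/*
	 * A busy task (util 600) capped by uclamp_max to 400, evaluated on
	 * a CPU of capacity 400: the old margin-based check flags it as
	 * misfit, the new check correctly says it fits.
	 */
	printf("old: %d, new: %d\n",
	       old_task_fits(600, 0, 400, 400),
	       new_util_fits_cpu(600, 0, 400, 400));
	return 0;
}

[ This prints "old: 0, new: 1": under the old check, a task capped to
  exactly a CPU's capacity was still treated as misfit and migrated away;
  with the new rules it is left where its clamps say it belongs. ]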
Fixes: a7008c07a568 ("sched/fair: Make task_fits_capacity() consider uclamp restrictions")
Co-developed-by: Vincent Guittot
Signed-off-by: Qais Yousef
Signed-off-by: Peter Zijlstra (Intel)
Link: https://lore.kernel.org/r/20220804143609.515789-3-qais.yousef@arm.com
(cherry picked from commit b48e16a69792b5dc4a09d6807369d11b2970cc36)
Signed-off-by: Qais Yousef (Google)
Signed-off-by: Greg Kroah-Hartman
---
 kernel/sched/fair.c  | 26 ++++++++++++++++----------
 kernel/sched/sched.h |  9 +++++++++
 2 files changed, 25 insertions(+), 10 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4197,10 +4197,12 @@ static inline int util_fits_cpu(unsigned
 	return fits;
 }
 
-static inline int task_fits_capacity(struct task_struct *p,
-				     unsigned long capacity)
+static inline int task_fits_cpu(struct task_struct *p, int cpu)
 {
-	return fits_capacity(uclamp_task_util(p), capacity);
+	unsigned long uclamp_min = uclamp_eff_value(p, UCLAMP_MIN);
+	unsigned long uclamp_max = uclamp_eff_value(p, UCLAMP_MAX);
+	unsigned long util = task_util_est(p);
+	return util_fits_cpu(util, uclamp_min, uclamp_max, cpu);
 }
 
 static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
@@ -4213,7 +4215,7 @@ static inline void update_misfit_status(
 		return;
 	}
 
-	if (task_fits_capacity(p, capacity_of(cpu_of(rq)))) {
+	if (task_fits_cpu(p, cpu_of(rq))) {
 		rq->misfit_task_load = 0;
 		return;
 	}
@@ -7942,7 +7944,7 @@ static int detach_tasks(struct lb_env *e
 
 		case migrate_misfit:
 			/* This is not a misfit task */
-			if (task_fits_capacity(p, capacity_of(env->src_cpu)))
+			if (task_fits_cpu(p, env->src_cpu))
 				goto next;
 
 			env->imbalance = 0;
@@ -8884,6 +8886,10 @@ static inline void update_sg_wakeup_stat
 
 	memset(sgs, 0, sizeof(*sgs));
 
+	/* Assume that task can't fit any CPU of the group */
+	if (sd->flags & SD_ASYM_CPUCAPACITY)
+		sgs->group_misfit_task_load = 1;
+
 	for_each_cpu(i, sched_group_span(group)) {
 		struct rq *rq = cpu_rq(i);
 		unsigned int local;
@@ -8903,12 +8909,12 @@ static inline void update_sg_wakeup_stat
 
 		if (!nr_running && idle_cpu_without(i, p))
 			sgs->idle_cpus++;
-	}
+		/* Check if task fits in the CPU */
+		if (sd->flags & SD_ASYM_CPUCAPACITY &&
+		    sgs->group_misfit_task_load &&
+		    task_fits_cpu(p, i))
+			sgs->group_misfit_task_load = 0;
 
-	/* Check if task fits in the group */
-	if (sd->flags & SD_ASYM_CPUCAPACITY &&
-	    !task_fits_capacity(p, group->sgc->max_capacity)) {
-		sgs->group_misfit_task_load = 1;
 	}
 
 	sgs->group_capacity = group->sgc->capacity;
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2468,6 +2468,15 @@ static inline bool uclamp_is_used(void)
 	return static_branch_likely(&sched_uclamp_used);
 }
 #else /* CONFIG_UCLAMP_TASK */
+static inline unsigned long uclamp_eff_value(struct task_struct *p,
+					     enum uclamp_id clamp_id)
+{
+	if (clamp_id == UCLAMP_MIN)
+		return 0;
+
+	return SCHED_CAPACITY_SCALE;
+}
+
 static inline unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
 						struct task_struct *p)