From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
	by smtp.lore.kernel.org (Postfix) with ESMTP id 4E8A8C6FD18
	for ; Tue, 18 Apr 2023 12:36:34 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S231732AbjDRMgd (ORCPT );
        Tue, 18 Apr 2023 08:36:33 -0400
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51512 "EHLO
        lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK)
        by vger.kernel.org with ESMTP id S231723AbjDRMgc (ORCPT );
        Tue, 18 Apr 2023 08:36:32 -0400
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
        by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A4D0712C80
        for ; Tue, 18 Apr 2023 05:36:31 -0700 (PDT)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
        (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
        (No client certificate requested)
        by dfw.source.kernel.org (Postfix) with ESMTPS id 406E163289
        for ; Tue, 18 Apr 2023 12:36:31 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5A66FC433EF;
        Tue, 18 Apr 2023 12:36:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
        s=korg; t=1681821390;
        bh=MXeqlNyXA7o7vUB1/hQxXX+F+UIf88IioKWB/+anQDU=;
        h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
        b=m6aDHItVBJES8k4ahf/o8n3bJqTom2CdRKSAr3In7QzHbG0/AVpwL6xVicpk9Be2t
         goO8bzBVgiPplPm2LX7MSG4r5zkuuYuZOrjoSae0XZWu7omklJKuw9/raSG7zZVoy8
         QZMZNXfvEVcSNRhGRFMDcJVd0H2pbXcrmkKvo9+0=
From: Greg Kroah-Hartman 
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman ,
        patches@lists.linux.dev,
        Tingjia Cao ,
        Vincent Guittot ,
        "Peter Zijlstra (Intel)" ,
        Sasha Levin 
Subject: [PATCH 5.10 108/124] sched/fair: Fix imbalance overflow
Date: Tue, 18 Apr 2023 14:22:07 +0200
Message-Id: <20230418120313.729875424@linuxfoundation.org>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230418120309.539243408@linuxfoundation.org>
References: <20230418120309.539243408@linuxfoundation.org>
User-Agent: quilt/0.67
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID: 
X-Mailing-List: stable@vger.kernel.org

From: Vincent Guittot 

[ Upstream commit 91dcf1e8068e9a8823e419a7a34ff4341275fb70 ]

When local group is fully busy but its average load is above system load,
computing the imbalance will overflow and local group is not the best
target for pulling this load.

Fixes: 0b0695f2b34a ("sched/fair: Rework load_balance()")
Reported-by: Tingjia Cao 
Signed-off-by: Vincent Guittot 
Signed-off-by: Peter Zijlstra (Intel) 
Tested-by: Tingjia Cao 
Link: https://lore.kernel.org/lkml/CABcWv9_DAhVBOq2=W=2ypKE9dKM5s2DvoV8-U0+GDwwuKZ89jQ@mail.gmail.com/T/
Signed-off-by: Sasha Levin 
---
 kernel/sched/fair.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 22139e97b2a8e..57a58bc48021a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9353,6 +9353,16 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 
 		sds->avg_load = (sds->total_load * SCHED_CAPACITY_SCALE) /
 				sds->total_capacity;
+
+		/*
+		 * If the local group is more loaded than the average system
+		 * load, don't try to pull any tasks.
+		 */
+		if (local->avg_load >= sds->avg_load) {
+			env->imbalance = 0;
+			return;
+		}
+
 	}
 
 	/*
-- 
2.39.2
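
For context on the failure mode: the load values used by calculate_imbalance()
are unsigned longs, so when local->avg_load exceeds sds->avg_load the
subtraction in the later imbalance computation wraps around to a huge positive
value instead of going negative, and the balancer would try to pull far more
load than exists. The snippet below is a minimal standalone sketch of that
wrap-around and of the guard the patch adds; the numeric values and the
simplified formula are illustrative assumptions, not the kernel's exact code.

/*
 * Standalone sketch (not kernel code) of the unsigned wrap-around that
 * the patch guards against. All values below are made up for illustration.
 */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024UL

int main(void)
{
	unsigned long sds_avg_load   = 900;   /* system-wide average load  */
	unsigned long local_avg_load = 1100;  /* local group above average */
	unsigned long local_capacity = 2048;
	unsigned long imbalance;

	/* Unguarded: the subtraction wraps because both operands are unsigned. */
	imbalance = (sds_avg_load - local_avg_load) * local_capacity /
		    SCHED_CAPACITY_SCALE;
	printf("without the check: imbalance = %lu\n", imbalance);

	/* With the patch's check: no pull when local is above the average. */
	if (local_avg_load >= sds_avg_load)
		imbalance = 0;
	printf("with the check:    imbalance = %lu\n", imbalance);

	return 0;
}

On a 64-bit machine the first printf reports a value on the order of 10^16,
which is why the patch returns early with env->imbalance = 0 before that
computation is reached when the local group is already above the system
average.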