From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id F1883171AE
	for ; Mon, 8 May 2023 11:44:38 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3026AC433D2;
	Mon, 8 May 2023 11:44:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1683546278;
	bh=hRkAaEkv3lVaggNkPQLPbjCh9tEkRbA16XgV7adRIR4=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=rBfB7sCcOL7kJkqmaocsGiNNmOQshfRKhYIpaaU2IPsB+8YRwobXPu/s240zlz/ZA
	 4Zx30QJr4PhhyTDJbQiyp33QtT060YLFQE89r29pI9vi6HuE//GcsoXiPYgG5wP6AD
	 P/aVJdCq5u+uhIkOiJf4QkiucrQIlW6sPnwSbq3o=
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman ,
	patches@lists.linux.dev,
	Daniel Jordan ,
	Libo Chen ,
	"Peter Zijlstra (Intel)" ,
	"Gautham R. Shenoy" ,
	Sasha Levin
Subject: [PATCH 5.15 288/371] sched/fair: Fix inaccurate tally of ttwu_move_affine
Date: Mon, 8 May 2023 11:48:09 +0200
Message-Id: <20230508094823.455039458@linuxfoundation.org>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230508094811.912279944@linuxfoundation.org>
References: <20230508094811.912279944@linuxfoundation.org>
User-Agent: quilt/0.67
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Libo Chen

[ Upstream commit 39afe5d6fc59237ff7738bf3ede5a8856822d59d ]

There are scenarios where non-affine wakeups are incorrectly counted as
affine wakeups by schedstats.

When wake_affine_idle() returns prev_cpu, which does not equal
nr_cpumask_bits, it slips through the check `target == nr_cpumask_bits`
in wake_affine() and is counted as if target == this_cpu in schedstats.
Replace `target == nr_cpumask_bits` with `target != this_cpu` to make
sure affine wakeups are accurately tallied.

Fixes: 806486c377e33 ("sched/fair: Do not migrate if the prev_cpu is idle")
Suggested-by: Daniel Jordan
Signed-off-by: Libo Chen
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Gautham R. Shenoy
Link: https://lore.kernel.org/r/20220810223313.386614-1-libo.chen@oracle.com
Signed-off-by: Sasha Levin
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2dd67e212f0ac..646a6ae4b2509 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6207,7 +6207,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 		target = wake_affine_weight(sd, p, this_cpu, prev_cpu, sync);
 
 	schedstat_inc(p->stats.nr_wakeups_affine_attempts);
-	if (target == nr_cpumask_bits)
+	if (target != this_cpu)
 		return prev_cpu;
 
 	schedstat_inc(sd->ttwu_move_affine);
-- 
2.39.2