Message-ID: <0171d56e-c8b3-4a17-85a5-93ac407aae5f@arm.com>
Date: Wed, 6 May 2026 14:10:22 +0100
Subject: Re: [PATCH v2 3/4] sched/fair: Allow load balancing between CPUs of identical capacity
From: Christian Loehle
To: Ricardo Neri, Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
 Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
 Valentin Schneider, Tim C Chen, Chen Yu, Barry Song
Cc: "Rafael J. Wysocki", Len Brown, ricardo.neri@intel.com,
 linux-kernel@vger.kernel.org
References: <20260429-rneri-fix-cas-clusters-v2-0-cd787de35cc6@linux.intel.com>
 <20260429-rneri-fix-cas-clusters-v2-3-cd787de35cc6@linux.intel.com>
In-Reply-To: <20260429-rneri-fix-cas-clusters-v2-3-cd787de35cc6@linux.intel.com>

On 4/29/26 22:19, Ricardo Neri wrote:
> sched_balance_find_src_rq() avoids selecting a runqueue with a single
> running task as busiest if doing so results in migrating the task to a
> CPU with less than ~5% of extra capacity. It also unintentionally
> prevents migrations between CPUs of identical capacity.
>
> When CONFIG_SCHED_CLUSTER is enabled, load should be balanced across
> clusters of CPUs with the same capacity. Allowing migration between CPUs
> of identical capacity is necessary to meet this goal.
>
> We are interested in the architectural capacity of the involved CPUs,
> excluding any reductions due to side activity or thermal pressure. Use
> arch_scale_cpu_capacity().
>
> While here, invert the check for runtime capacity for clarity.
>
> Signed-off-by: Ricardo Neri
> ---
> Changes since v1:
>  * Used arch_scale_cpu_capacity() instead of capacity_of() to ignore
>    runtime variability.
>  * Inverted the check for runtime capacity. (Christian)
>  * Reworded patch description for clarity.
> ---
>  kernel/sched/fair.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 166a5b109e0e..4105717e64fe 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -11816,9 +11816,14 @@ static struct rq *sched_balance_find_src_rq(struct lb_env *env,
>  		 * eventually lead to active_balancing high->low capacity.
>  		 * Higher per-CPU capacity is considered better than balancing
>  		 * average load.
> +		 *
> +		 * Cluster scheduling requires balancing load across clusters
> +		 * of identical capacity. Use architectural capacity to ignore
> +		 * runtime variability.
>  		 */
>  		if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
> -		    !capacity_greater(capacity_of(env->dst_cpu), capacity) &&
> +		    arch_scale_cpu_capacity(env->dst_cpu) != arch_scale_cpu_capacity(i) &&
> +		    capacity_greater(capacity, capacity_of(env->dst_cpu)) &&
>  		    nr_running == 1)
>  			continue;
>

I wonder if we shouldn't use the capacity_greater() margin for both, i.e.

	capacity_greater(arch_scale_cpu_capacity(i), arch_scale_cpu_capacity(env->dst_cpu)) &&

For example, the Orion O6 has a cluster with capacity 1024 and one with 984.
If we allow balancing 984->984, I think it's only consistent to also allow
984->1024.
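To make the margin arithmetic concrete, here is a minimal standalone Python
sketch (not kernel code) of capacity_greater(), which kernel/sched/fair.c
defines as ((cap1) * 1024 > (cap2) * 1078), i.e. cap1 must exceed cap2 by
roughly 5%. The 1024/984 figures are the Orion O6 example from above; the
helper name mirrors the kernel macro but is otherwise an illustration only.

```python
def capacity_greater(cap1: int, cap2: int) -> bool:
    """True if cap1 exceeds cap2 by more than the ~5% margin
    (mirrors the kernel macro: (cap1) * 1024 > (cap2) * 1078)."""
    return cap1 * 1024 > cap2 * 1078

# Orion O6-style clusters: architectural capacities 1024 and 984.
big, little = 1024, 984

# 1024 is NOT ~5% greater than 984 (1024*1024 = 1048576 < 984*1078 = 1060752),
# so the pre-patch check !capacity_greater(dst, src) also skipped 984->1024:
print(capacity_greater(big, little))    # False

# With the margin applied to the source side instead, as suggested,
# the skip condition capacity_greater(src, dst) is also false here,
# so a 984->1024 migration would be allowed:
print(capacity_greater(little, big))    # False
```

Run on the example capacities, both directions fall inside the ~5% band, which
is the consistency argument: if 984->984 is allowed, 984->1024 should be too.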