Date: Mon, 12 Jun 2023 14:05:28 +0200
From: Peter Zijlstra
To: Tim Chen
Cc: Juri Lelli, Vincent Guittot, Ricardo Neri, "Ravi V . Shankar",
	Ben Segall, Daniel Bristot de Oliveira, Dietmar Eggemann,
	Len Brown, Mel Gorman, "Rafael J . Wysocki", Srinivas Pandruvada,
	Steven Rostedt, Valentin Schneider, Ionela Voinescu,
	x86@kernel.org, linux-kernel@vger.kernel.org, Shrikanth Hegde,
	Srikar Dronamraju, naveen.n.rao@linux.vnet.ibm.com,
	Yicong Yang, Barry Song, Chen Yu, Hillf Danton
Subject: Re: [Patch v2 3/6] sched/fair: Implement prefer sibling imbalance calculation between asymmetric groups
Message-ID: <20230612120528.GL4253@hirez.programming.kicks-ass.net>

On Thu, Jun 08, 2023 at 03:32:29PM -0700, Tim Chen wrote:

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 03573362274f..0b0904263d51 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9372,6 +9372,65 @@ static inline bool smt_balance(struct lb_env *env, struct sg_lb_stats *sgs,
>  	return false;
>  }
>  
> +static inline long sibling_imbalance(struct lb_env *env,
> +				    struct sd_lb_stats *sds,
> +				    struct sg_lb_stats *busiest,
> +				    struct sg_lb_stats *local)
> +{
> +	int ncores_busiest, ncores_local;
> +	long imbalance;
> +
> +	if (env->idle == CPU_NOT_IDLE)
> +		return 0;
> +
> +	ncores_busiest = sds->busiest->cores;
> +	ncores_local = sds->local->cores;
> +
> +	if (ncores_busiest == ncores_local &&
> +	    (!(env->sd->flags & SD_ASYM_PACKING) ||
> +	      sched_asym_equal(env->dst_cpu,
> +			       sds->busiest->asym_prefer_cpu))) {
> +		imbalance = busiest->sum_nr_running;
> +		lsub_positive(&imbalance, local->sum_nr_running);
> +		return imbalance;
> +	}
> +
> +	/* Balance such that nr_running/ncores ratio are same on both groups */
> +	imbalance = ncores_local * busiest->sum_nr_running;
> +	lsub_positive(&imbalance, ncores_busiest * local->sum_nr_running);
> +	/* Normalize imbalance to become tasks to be moved to restore balance */
> +	imbalance /= ncores_local + ncores_busiest;
> +
> +	if (env->sd->flags & SD_ASYM_PACKING) {
> +		int limit;
> +
> +		if (!busiest->sum_nr_running)
> +			goto out;

This seems out of place; shouldn't we terminate sooner if busiest is
empty?

> +
> +		if (sched_asym_prefer(env->dst_cpu, sds->busiest->asym_prefer_cpu)) {
> +			/* Don't leave preferred core idle */
> +			if (imbalance == 0 && local->sum_nr_running < ncores_local)
> +				imbalance = 1;
> +			goto out;
> +		}
> +
> +		/* Limit tasks moved from preferred group, don't leave cores idle */
> +		limit = busiest->sum_nr_running;
> +		lsub_positive(&limit, ncores_busiest);
> +		if (imbalance > limit)
> +			imbalance = limit;

How does this affect the server parts that have larger than single-core
turbo domains?

> +
> +		goto out;
> +	}
> +
> +	/* Take advantage of resource in an empty sched group */
> +	if (imbalance == 0 && local->sum_nr_running == 0 &&
> +	    busiest->sum_nr_running > 1)
> +		imbalance = 1;
> +out:
> +	return imbalance << 1;
> +}

But basically you have:

	      LcBn - BcLn
	imb = -----------
	        Lc + Bc

which makes sense, except you then return:

	imb * 2

which then made me wonder about rounding. Do we want to add
(Lc + Bc - 1) or ((Lc + Bc) / 2) to resp. ceil() or round() the thing
before division? Because currently it uses floor().

If you evaluate it like:

	      2 * (LcBn - BcLn)
	imb = -----------------
	           Lc + Bc

the result is different from what you have now. What actual behaviour
is desired in these low imbalance cases? And can you write a comment as
to why we do as we do etc.?