From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 5 May 2026 17:20:54 +0200
From: Peter Zijlstra
To: Yuri Andriaccio
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
	linux-kernel@vger.kernel.org, Luca Abeni, Yuri Andriaccio
Subject: Re: [RFC PATCH v5 22/29] sched/rt: Add rt-cgroup migration functions
Message-ID: <20260505152054.GG3102624@noisy.programming.kicks-ass.net>
References: <20260430213835.62217-1-yurand2000@gmail.com>
 <20260430213835.62217-23-yurand2000@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20260430213835.62217-23-yurand2000@gmail.com>

On Thu, Apr 30, 2026 at 11:38:26PM +0200, Yuri Andriaccio wrote:
> +static int group_find_lowest_rt_rq(struct task_struct *task, struct rt_rq *task_rt_rq)
> +{
> +	struct sched_domain *sd;
> +	struct cpumask lowest_mask;
> +	struct sched_dl_entity *dl_se;
> +	struct rt_rq *rt_rq;
> +	int prio, lowest_prio;
> +	int cpu, this_cpu = smp_processor_id();
> +
> +	if (task->nr_cpus_allowed == 1)
> +		return -1; /* No other targets possible */
> +
> +	lowest_prio = task->prio - 1;
> +	cpumask_clear(&lowest_mask);
> +	for_each_cpu_and(cpu, cpu_online_mask, task->cpus_ptr) {
> +		dl_se = task_rt_rq->tg->dl_se[cpu];
> +		rt_rq = &dl_se->my_q->rt;
> +		prio = rt_rq->highest_prio.curr;
> +
> +		/*
> +		 * If we're on asym system ensure we consider the different capacities
> +		 * of the CPUs when searching for the lowest_mask.
> +		 */
> +		if (dl_se->dl_throttled || !rt_task_fits_capacity(task, cpu))
> +			continue;
> +
> +		if (prio >= lowest_prio) {
> +			if (prio > lowest_prio) {
> +				cpumask_clear(&lowest_mask);
> +				lowest_prio = prio;
> +			}
> +
> +			cpumask_set_cpu(cpu, &lowest_mask);
> +		}
> +	}
> +
> +	if (cpumask_empty(&lowest_mask))
> +		return -1;
> +
> +	/*
> +	 * At this point we have built a mask of CPUs representing the
> +	 * lowest priority tasks in the system. Now we want to elect
> +	 * the best one based on our affinity and topology.
> +	 *
> +	 * We prioritize the last CPU that the task executed on since
> +	 * it is most likely cache-hot in that location.
> +	 */
> +	cpu = task_cpu(task);
> +	if (cpumask_test_cpu(cpu, &lowest_mask))
> +		return cpu;
> +
> +	/*
> +	 * Otherwise, we consult the sched_domains span maps to figure
> +	 * out which CPU is logically closest to our hot cache data.
> +	 */
> +	if (!cpumask_test_cpu(this_cpu, &lowest_mask))
> +		this_cpu = -1; /* Skip this_cpu opt if not among lowest */
> +
> +	scoped_guard(rcu) {
> +	for_each_domain(cpu, sd) {
> +		if (sd->flags & SD_WAKE_AFFINE) {
> +			int best_cpu;
> +
> +			/*
> +			 * "this_cpu" is cheaper to preempt than a
> +			 * remote processor.
> +			 */
> +			if (this_cpu != -1 &&
> +			    cpumask_test_cpu(this_cpu, sched_domain_span(sd)))
> +				return this_cpu;
> +
> +			best_cpu = cpumask_any_and_distribute(&lowest_mask,
> +							      sched_domain_span(sd));
> +			if (best_cpu < nr_cpu_ids)
> +				return best_cpu;
> +		}
> +	}
> +	}

I appreciate you trying to save on indent, but this does violate coding-style,
please indent as normal.
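That is (untested sketch, same names as the patch, the guard body pushed in
one level):

```c
	scoped_guard(rcu) {
		for_each_domain(cpu, sd) {
			if (sd->flags & SD_WAKE_AFFINE) {
				int best_cpu;

				/*
				 * "this_cpu" is cheaper to preempt than a
				 * remote processor.
				 */
				if (this_cpu != -1 &&
				    cpumask_test_cpu(this_cpu, sched_domain_span(sd)))
					return this_cpu;

				best_cpu = cpumask_any_and_distribute(&lowest_mask,
								      sched_domain_span(sd));
				if (best_cpu < nr_cpu_ids)
					return best_cpu;
			}
		}
	}
```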
> +
> +	/*
> +	 * And finally, if there were no matches within the domains
> +	 * just give the caller *something* to work with from the compatible
> +	 * locations.
> +	 */
> +	if (this_cpu != -1)
> +		return this_cpu;
> +
> +	cpu = cpumask_any_distribute(&lowest_mask);
> +	if (cpu < nr_cpu_ids)
> +		return cpu;
> +
> +	return -1;
> +}