Date: Fri, 5 Dec 2025 09:59:12 +0100
From: Peter Zijlstra
To: Vincent Guittot
Cc: mingo@redhat.com, juri.lelli@redhat.com, dietmar.eggemann@arm.com,
	rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
	vschneid@redhat.com, linux-kernel@vger.kernel.org,
	pierre.gondois@arm.com, kprateek.nayak@amd.com, qyousef@layalina.io,
	hongyan.xia2@arm.com, christian.loehle@arm.com, luis.machado@arm.com
Subject: Re: [PATCH 4/6 v8] sched/fair: Add push task mechanism for fair
Message-ID: <20251205085912.GQ2528459@noisy.programming.kicks-ass.net>
References: <20251202181242.1536213-1-vincent.guittot@linaro.org>
 <20251202181242.1536213-5-vincent.guittot@linaro.org>
 <20251204112947.GK2528459@noisy.programming.kicks-ass.net>
On Thu, Dec 04, 2025 at 03:34:15PM +0100, Vincent Guittot wrote:
> On Thu, 4 Dec 2025 at 12:29, Peter Zijlstra wrote:
> >
> > On Tue, Dec 02, 2025 at 07:12:40PM +0100, Vincent Guittot wrote:
> > > +/*
> > > + * See if the non-running fair tasks on this rq can be sent to other
> > > + * CPUs that fit better with their profile.
> > > + */
> > > +static bool push_fair_task(struct rq *rq)
> > > +{
> > > +        struct task_struct *next_task;
> > > +        int prev_cpu, new_cpu;
> > > +        struct rq *new_rq;
> > > +
> > > +        next_task = pick_next_pushable_fair_task(rq);
> > > +        if (!next_task)
> > > +                return false;
> > > +
> > > +        if (is_migration_disabled(next_task))
> > > +                return true;
> > > +
> > > +        /* We might release the rq lock */
> > > +        get_task_struct(next_task);
> > > +
> > > +        prev_cpu = rq->cpu;
> > > +
> > > +        new_cpu = select_task_rq_fair(next_task, prev_cpu, 0);
> > > +
> > > +        if (new_cpu == prev_cpu)
> > > +                goto out;
> > > +
> > > +        new_rq = cpu_rq(new_cpu);
> > > +
> > > +        if (double_lock_balance(rq, new_rq)) {
> > > +                /* The task has already migrated in between */
> > > +                if (task_cpu(next_task) != rq->cpu) {
> > > +                        double_unlock_balance(rq, new_rq);
> > > +                        goto out;
> > > +                }
> > > +
> > > +                deactivate_task(rq, next_task, 0);
> > > +                set_task_cpu(next_task, new_cpu);
> > > +                activate_task(new_rq, next_task, 0);
> > > +
> > > +                resched_curr(new_rq);
> > > +
> > > +                double_unlock_balance(rq, new_rq);
> > > +        }
> >
> > Why not use move_queued_task() ?
>
> double_lock_balance() can fail, and so avoids blocking while waiting
> for the new rq, whereas move_queued_task() will wait, won't it?
>
> Do you think move_queued_task() would be better?

No, double_lock_balance() never fails; the return value indicates
whether the currently held rq lock (the first argument) was unlocked
while acquiring both -- this is required when the first rq has a
higher address than the second. (See the sketch at the end of this
mail.)

double_lock_balance() also puts the wait time and hold time of the
second lock inside the hold time of the first, which IIRC gets you a
quadratic term in the rq hold times. Something that's best avoided.

move_queued_task() OTOH takes the task off the runqueue you already
hold locked, drops that lock, acquires the second, puts the task
there, and returns with the dst rq locked (also sketched below).

> In case of migrate_disable, push_fair_task() returns true and we
> continue with the next task (it should not happen much anyway). If
> the task is migrate_disabled when we try to push it, we remove it
> from the list anyway. For now, we try not to have more than one task
> in the list to cap the overhead on sched_switch.

Right, clearly I needed more wake-up juice; I thought it returned
false and the task would stick around.
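
For reference, the locking logic is roughly this -- a simplified
sketch, not the exact kernel code (the real _double_lock_balance()
also has a trylock fast path and orders on the lock pointers rather
than the raw rq pointers):

/*
 * Acquire busiest->lock while already holding this_rq->lock.
 *
 * rq locks nest in a fixed (address) order to avoid ABBA deadlock.
 * When this_rq orders before busiest we can simply nest the second
 * lock; otherwise we must drop this_rq->lock and retake both in
 * order. The return value reports whether that drop happened (1)
 * or not (0) -- there is no failure case.
 */
static int double_lock_balance(struct rq *this_rq, struct rq *busiest)
{
        if (this_rq < busiest) {
                raw_spin_rq_lock_nested(busiest, SINGLE_DEPTH_NESTING);
                return 0;
        }

        raw_spin_rq_unlock(this_rq);
        raw_spin_rq_lock(busiest);
        raw_spin_rq_lock_nested(this_rq, SINGLE_DEPTH_NESTING);
        return 1;
}

A return of 1 is exactly why the caller above must re-check
task_cpu(next_task): the first lock was dropped, so the task may
have migrated in the meantime.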
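
And the shape of move_queued_task(), roughly (simplified from
kernel/sched/core.c):

/*
 * Move a queued task from rq to new_cpu's rq. Called with rq->lock
 * held; returns with the lock of the *destination* rq held instead.
 * The two locks are never held at the same time, so their hold
 * times do not nest.
 */
static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf,
                                   struct task_struct *p, int new_cpu)
{
        lockdep_assert_rq_held(rq);

        /* Take the task off the rq we already hold locked... */
        deactivate_task(rq, p, DEQUEUE_NOCLOCK);
        set_task_cpu(p, new_cpu);
        rq_unlock(rq, rf);

        /* ...then lock the destination and put the task there. */
        rq = cpu_rq(new_cpu);
        rq_lock(rq, rf);
        WARN_ON_ONCE(task_cpu(p) != new_cpu);
        activate_task(rq, p, 0);
        wakeup_preempt(rq, p, 0);

        return rq;
}

Using this in push_fair_task() would avoid the nested double-lock
entirely, at the cost of always dropping the source rq lock.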