Date: Tue, 24 Mar 2026 19:13:16 +0000
In-Reply-To: <20260324191337.1841376-1-jstultz@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References:
 <20260324191337.1841376-1-jstultz@google.com>
X-Mailer: git-send-email 2.53.0.1018.g2bb0e51243-goog
Message-ID: <20260324191337.1841376-2-jstultz@google.com>
Subject: [PATCH v26 01/10] sched: Make class_schedulers avoid pushing
 current, and get rid of proxy_tag_curr()
From: John Stultz
To: LKML
Cc: John Stultz, K Prateek Nayak, Peter Zijlstra, Joel Fernandes,
 Qais Yousef, Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
 Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
 Mel Gorman, Will Deacon, Waiman Long, Boqun Feng, "Paul E. McKenney",
 Metin Kaya, Xuewen Yan, Thomas Gleixner, Daniel Lezcano,
 Suleiman Souhlal, kuyo chang, hupu, kernel-team@android.com
Content-Type: text/plain; charset="UTF-8"

With proxy-execution, the scheduler selects the donor, but for blocked
donors we end up running the lock owner. This caused some complexity:
the class schedulers make sure to remove the task they pick from their
pushable-task lists, which prevents the donor from being migrated, but
there wasn't then anything to prevent rq->curr from being migrated when
rq->curr != rq->donor.

This was sort of hacked around by calling proxy_tag_curr() on the
rq->curr task if we were running something other than the donor.
proxy_tag_curr() did a dequeue/enqueue pair on the rq->curr task,
allowing the class schedulers to remove it from their pushable lists.

The dequeue/enqueue pair was wasteful, and additionally K Prateek
highlighted that we didn't properly undo things when we stopped
proxying, leaving the lock owner off the pushable list.

After some alternative approaches were considered, Peter suggested
having the RT/DL classes simply avoid migrating a task when it is
task_on_cpu(). So rework pick_next_pushable_dl_task() and the RT
pick_next_pushable_task() so that they skip over pushable tasks that
are on_cpu. Then drop all of the proxy_tag_curr() logic.
Fixes: be39617e38e0 ("sched: Fix proxy/current (push,pull)ability")
Reported-by: K Prateek Nayak
Closes: https://lore.kernel.org/lkml/e735cae0-2cc9-4bae-b761-fcb082ed3e94@amd.com/
Suggested-by: Peter Zijlstra
Signed-off-by: John Stultz
---
v26:
* Fix issue Juri noticed by using a separate iterator value in
  pick_next_pushable_dl_task()

Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Mel Gorman
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: Daniel Lezcano
Cc: Suleiman Souhlal
Cc: kuyo chang
Cc: hupu
Cc: kernel-team@android.com
---
 kernel/sched/core.c     | 24 ------------------------
 kernel/sched/deadline.c | 18 ++++++++++++++++--
 kernel/sched/rt.c       | 15 ++++++++++++---
 3 files changed, 28 insertions(+), 29 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 496dff740dcaf..92b1807c05a4e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6705,23 +6705,6 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 }
 #endif /* SCHED_PROXY_EXEC */
 
-static inline void proxy_tag_curr(struct rq *rq, struct task_struct *owner)
-{
-	if (!sched_proxy_exec())
-		return;
-	/*
-	 * pick_next_task() calls set_next_task() on the chosen task
-	 * at some point, which ensures it is not push/pullable.
-	 * However, the chosen/donor task *and* the mutex owner form an
-	 * atomic pair wrt push/pull.
-	 *
-	 * Make sure owner we run is not pushable. Unfortunately we can
-	 * only deal with that by means of a dequeue/enqueue cycle. :-/
-	 */
-	dequeue_task(rq, owner, DEQUEUE_NOCLOCK | DEQUEUE_SAVE);
-	enqueue_task(rq, owner, ENQUEUE_NOCLOCK | ENQUEUE_RESTORE);
-}
-
 /*
  * __schedule() is the main scheduler function.
  *
@@ -6874,9 +6857,6 @@ static void __sched notrace __schedule(int sched_mode)
 	 */
 	RCU_INIT_POINTER(rq->curr, next);
 
-	if (!task_current_donor(rq, next))
-		proxy_tag_curr(rq, next);
-
 	/*
 	 * The membarrier system call requires each architecture
 	 * to have a full memory barrier after updating
@@ -6910,10 +6890,6 @@ static void __sched notrace __schedule(int sched_mode)
 		/* Also unlocks the rq: */
 		rq = context_switch(rq, prev, next, &rf);
 	} else {
-		/* In case next was already curr but just got blocked_donor */
-		if (!task_current_donor(rq, next))
-			proxy_tag_curr(rq, next);
-
 		rq_unpin_lock(rq, &rf);
 		__balance_callbacks(rq, NULL);
 		raw_spin_rq_unlock_irq(rq);
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index d08b004293234..52c524f5ba4dd 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2801,12 +2801,26 @@ static int find_later_rq(struct task_struct *task)
 
 static struct task_struct *pick_next_pushable_dl_task(struct rq *rq)
 {
-	struct task_struct *p;
+	struct task_struct *i, *p = NULL;
+	struct rb_node *next_node;
 
 	if (!has_pushable_dl_tasks(rq))
 		return NULL;
 
-	p = __node_2_pdl(rb_first_cached(&rq->dl.pushable_dl_tasks_root));
+	next_node = rb_first_cached(&rq->dl.pushable_dl_tasks_root);
+	while (next_node) {
+		i = __node_2_pdl(next_node);
+		/* make sure task isn't on_cpu (possible with proxy-exec) */
+		if (!task_on_cpu(rq, i)) {
+			p = i;
+			break;
+		}
+
+		next_node = rb_next(next_node);
+	}
+
+	if (!p)
+		return NULL;
 
 	WARN_ON_ONCE(rq->cpu != task_cpu(p));
 	WARN_ON_ONCE(task_current(rq, p));
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index f69e1f16d9238..61569b622d1a3 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1853,13 +1853,22 @@ static int find_lowest_rq(struct task_struct *task)
 
 static struct task_struct *pick_next_pushable_task(struct rq *rq)
 {
-	struct task_struct *p;
+	struct plist_head *head = &rq->rt.pushable_tasks;
+	struct task_struct *i, *p = NULL;
 
 	if (!has_pushable_tasks(rq))
 		return NULL;
 
-	p = plist_first_entry(&rq->rt.pushable_tasks,
-			      struct task_struct, pushable_tasks);
+	plist_for_each_entry(i, head, pushable_tasks) {
+		/* make sure task isn't on_cpu (possible with proxy-exec) */
+		if (!task_on_cpu(rq, i)) {
+			p = i;
+			break;
+		}
+	}
+
+	if (!p)
+		return NULL;
 
 	BUG_ON(rq->cpu != task_cpu(p));
 	BUG_ON(task_current(rq, p));
-- 
2.53.0.1018.g2bb0e51243-goog