All of lore.kernel.org
 help / color / mirror / Atom feed
From: Peter Zijlstra <peterz@infradead.org>
To: Tejun Heo <tj@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Valentin Schneider <vschneid@redhat.com>,
	K Prateek Nayak <kprateek.nayak@amd.com>,
	Kyle McMartin <jkkm@meta.com>,
	linux-kernel@vger.kernel.org, stable@vger.kernel.org
Subject: Re: [PATCH sched/core] sched/rt: Fix RT_PUSH_IPI soft lockup loop
Date: Thu, 7 May 2026 16:14:37 +0200	[thread overview]
Message-ID: <20260507141437.GJ3102624@noisy.programming.kicks-ass.net> (raw)
In-Reply-To: <20260506235716.2530720-1-tj@kernel.org>

On Wed, May 06, 2026 at 01:57:16PM -1000, Tejun Heo wrote:
> push_rt_task() picks the highest pushable RT task next_task. If it
> outranks rq->donor, the existing path calls resched_curr() and
> returns 0, trusting local schedule() to pick next_task soon.
> 
> The RT_PUSH_IPI relay caller (rto_push_irq_work_func()) cannot rely
> on that. When this CPU has a steady supply of softirq work (e.g.,
> incoming packets), the next push IPI arrives before schedule() can
> run. Other CPUs keep seeing this CPU as overloaded and keep sending
> IPIs, this CPU keeps taking the same bail, and the loop repeats
> until soft lockup.
> 
> Seen in production on hosts with sustained NET_RX softirq load:
> the loop ran millions of iterations before tripping the soft-lockup
> watchdog.
> 
> Skip the prio bail when called via the IPI relay (pull=true) so
> push_rt_task() migrates next_task to another CPU. Verified with a
> synthetic reproducer.
> 
> Fixes: b6366f048e0c ("sched/rt: Use IPI to trigger RT task push migration instead of pulling")
> Cc: Kyle McMartin <jkkm@meta.com>
> Cc: stable@vger.kernel.org # v5.10+
> Signed-off-by: Tejun Heo <tj@kernel.org>
> ---
> This looks minimal to me, but happy for suggestions. Thanks.
> 
>  kernel/sched/rt.c |    8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1968,8 +1968,14 @@ retry:
>  	 * It's possible that the next_task slipped in of
>  	 * higher priority than current. If that's the case
>  	 * just reschedule current.
> +	 *
> +	 * This doesn't work for the IPI relay caller (pull). When this CPU
> +	 * has a steady supply of softirq work (e.g., incoming packets), the
> +	 * next push IPI arrives before schedule() can run. Other CPUs keep
> +	 * seeing it as overloaded and keep sending IPIs, this CPU keeps
> +	 * taking the same bail, and the loop repeats until soft lockup.
>  	 */
> -	if (unlikely(next_task->prio < rq->donor->prio)) {
> +	if (unlikely(next_task->prio < rq->donor->prio) && !pull) {
>  		resched_curr(rq);
>  		return 0;
>  	}

IIRC Steve has a test for this stuff. If this breaks things, an
alternative is keeping a counter/limit on attempts or something.


--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1339,6 +1339,8 @@ struct rq {
 	unsigned int		nr_pinned;
 	unsigned int		push_busy;
 	struct cpu_stop_work	push_work;
+	unsigned int		rt_switches;
+	unsigned int		rt_push_resched;
 
 #ifdef CONFIG_SCHED_CORE
 	/* per rq */
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2941,6 +2941,13 @@ static int push_dl_task(struct rq *rq)
 	if (dl_task(rq->donor) &&
 	    dl_time_before(next_task->dl.deadline, rq->donor->dl.deadline) &&
 	    rq->curr->nr_cpus_allowed > 1) {
+		if (rq->rt_switches != rq->nr_switches) {
+			rq->rt_switches = rq->nr_switches;
+			rq->rt_push_resched = 0;
+		}
+		if (test_tsk_need_resched(rq->curr) && ++rq->rt_push_resched > 16)
+			return 1;
+
 		resched_curr(rq);
 		return 0;
 	}

  reply	other threads:[~2026-05-07 14:14 UTC|newest]

Thread overview: 17+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2026-05-06 23:57 [PATCH sched/core] sched/rt: Fix RT_PUSH_IPI soft lockup loop Tejun Heo
2026-05-07 14:14 ` Peter Zijlstra [this message]
2026-05-11 19:33   ` Tejun Heo
2026-05-12 15:37   ` Steven Rostedt
2026-05-12 18:07     ` Tejun Heo
2026-05-12 21:28       ` Steven Rostedt
2026-05-13 19:39         ` Tejun Heo
2026-05-14  0:24           ` Steven Rostedt
2026-05-14  0:53             ` Tejun Heo
2026-05-14  1:31               ` Steven Rostedt
2026-05-14  1:42                 ` Tejun Heo
2026-05-14  2:01                   ` Steven Rostedt
2026-05-14  4:48                     ` Tejun Heo
2026-05-14 14:03                       ` Steven Rostedt
2026-05-14 21:15                         ` Tejun Heo
2026-05-14 23:43                           ` Steven Rostedt
2026-05-12 20:10     ` Valentin Schneider

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20260507141437.GJ3102624@noisy.programming.kicks-ass.net \
    --to=peterz@infradead.org \
    --cc=bsegall@google.com \
    --cc=dietmar.eggemann@arm.com \
    --cc=jkkm@meta.com \
    --cc=juri.lelli@redhat.com \
    --cc=kprateek.nayak@amd.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=mgorman@suse.de \
    --cc=mingo@redhat.com \
    --cc=rostedt@goodmis.org \
    --cc=stable@vger.kernel.org \
    --cc=tj@kernel.org \
    --cc=vincent.guittot@linaro.org \
    --cc=vschneid@redhat.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.